Word Part 2 PDF
Document Details
Uploaded by SociableLapisLazuli3339
Butte College
Tags
Related
- Amazon AWS Exam Questions & Answers PDF
- AWS CSAA Practice Questions PDF (2020)
- AWS Certified Solutions Architect Associate Slides PDF
- AWS Cloud Practitioner Slides PDF
- AWS Certified Solutions Architect - Associate Master Cheat Sheet PDF
- AWS Certified Solutions Architect - Associate Exam Questions and Answers PDF
Summary
This document contains questions and answers about various AWS (Amazon Web Services) solutions, including migrating on-premises data centers to AWS, optimizing database costs for web applications, and utilizing storage solutions for different use cases. The document also covers topics such as data analytics, caching solutions, and security considerations in AWS.
Full Transcript
A company wants to migrate its on-premises data center to AWS. According to the company\'s compliance requirements, the company can use only the ap-northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet. Which solutions will meet these requirements? (Choose two...
A company wants to migrate its on-premises data center to AWS. According to the company\'s compliance requirements, the company can use only the ap-northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet. Which solutions will meet these requirements? (Choose two.) A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-northeast-3. B. Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except apnortheast-3 in the AWS account settings. C. Use AWS Organizations to configure service control policies (SCPS) that prevent VPCs from gaining internet access. Deny access to all AWS Regions except ap-northeast-3. D. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the use of any AWS Region other than ap-northeast-3. E. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed outside of ap-northeast-3. Answer: AC ========== Explanation: [https://docs.aws.amazon.com/organizations/latest/userguide/orgs\_manage\_policies\_scps\_example s\_vpc.html\#example\_vpc\_2] Question: 152 ============= A company uses a three-tier web application to provide training to new employees. The application is accessed for only 12 hours every day. The company is using an Amazon RDS for MySQL DB instance to store information and wants to minimize costs. What should a solutions architect do to meet these requirements? A. Configure an IAM policy for AWS Systems Manager Session Manager. Create an IAM role for the policy. Update the trust relationship of the role. Set up automatic start and stop for the DB instance. B. Create an Amazon ElastiCache for Redis cache cluster that gives users the ability to access the data from the cache when the DB instance is stopped. Invalidate the cache after the DB instance is started. C. Launch an Amazon EC2 instance. Create an IAM role that grants access to Amazon RDS. Attach the role to the EC2 instance. Configure a cron job to start and stop the EC2 instance on the desired schedule. D. Create AWS Lambda functions to start and stop the DB instance. Create Amazon EventBridge (Amazon CloudWatch Events) scheduled rules to invoke the Lambda functions. Configure the Lambda functions as event targets for the rules Answer: D ========= Explanation: In a typical development environment, dev and test databases are mostly utilized for 8 hours a day and sit idle when not in use. However, the databases are billed for the compute and storage costs during this idle time. To reduce the overall cost, Amazon RDS allows instances to be stopped temporarily. While the instance is stopped, you're charged for storage and backups, but not for the DB instance hours. Please note that a stopped instance will automatically be started after 7 days. This post presents a solution using AWS Lambda and Amazon EventBridge that allows you to schedule a Lambda function to stop and start the idle databases with specific tags to save on compute costs. The second post presents a solution that accomplishes stop and start of the idle Amazon RDS databases using AWS Systems Manager. Question: 153 ============= A company sells ringtones created from clips of popular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at least 128 KB in size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to save money on storage while keeping the most accessed files readily available for its users. Which action should the company take to meet these requirements MOST cost-effectively? A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects. B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days. C. Configure S3 inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-1A) after 90 days. D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 StandardInfrequent Access (S3 Standard-1A) after 90 days. Answer: D ========= Explanation: This solution meets the requirements of saving money on storage while keeping the most accessed files readily available for the users. S3 Lifecycle policy can automatically move objects from one storage class to another based on predefined rules. S3 Standard-IA is a lower-cost storage class for data that is accessed less frequently, but requires rapid access when needed. It is suitable for ringtones older than 90 days that are downloaded infrequently. Option A is incorrect because configuring S3 Standard-IA for the initial storage tier of the objects can incur higher costs for frequent access and retrieval fees. Option B is incorrect because moving the files to S3 Intelligent-Tiering can incur additional monitoring and automation fees that may not be necessary for ringtones older than 90 days. Option C is incorrect because using S3 inventory to manage objects and move them to S3 Standard-IA can be complex and time-consuming, and it does not provide automatic cost savings. Reference: https://aws.amazon.com/s3/storage-classes/ https://aws.amazon.com/s3/cloud-storage-cost-optimization-ebook/ Question: 154 ============= A company needs to save the results from a medical trial to an Amazon S3 repository. The repository must allow a few scientists to add new files and must restrict all other users to read-only access. No users can have the ability to modify or delete any files in the repository. The company must keep every file in the repository for a minimum of 1 year after its creation date. Which solution will meet these requirements? A. Use S3 Object Lock In governance mode with a legal hold of 1 year B. Use S3 Object Lock in compliance mode with a retention period of 365 days. C. Use an IAM role to restrict all users from deleting or changing objects in the S3 bucket Use an S3 bucket policy to only allow the IAM role D. Configure the S3 bucket to invoke an AWS Lambda function every tune an object is added Configure the function to track the hash of the saved object to that modified objects can be marked accordingly Answer: B ========= Explanation: In compliance mode, a protected object version can\'t be overwritten or deleted by any user, including the root user in your AWS account. When an object is locked in compliance mode, its retention mode can\'t be changed, and its retention period can\'t be shortened. Compliance mode helps ensure that an object version can\'t be overwritten or deleted for the duration of the retention period. In governance mode, users can\'t overwrite or delete an object version or alter its lock settings unless they have special permissions. With governance mode, you protect objects against being deleted by most users, but you can still grant some users permission to alter the retention settings or delete the object if necessary. In Governance mode, Objects can be deleted by some users with special permissions, this is against the requirement. Compliance: - Object versions can\'t be overwritten or deleted by any user, including the root user - Objects retention modes can\'t be changed, and retention periods can\'t be shortened Governance: - Most users can\'t overwrite or delete an object version or alter its lock settings - Some users have special permissions to change the retention or delete the object Question: 155 ============= A large media company hosts a web application on AWS. The company wants to start caching confidential media files so that users around the world will have reliable access to the files. The content is stored in Amazon S3 buckets. The company must deliver the content quickly, regardless of where the requests originate geographically. Which solution will meet these requirements? A. Use AWS DataSync to connect the S3 buckets to the web application. B. Deploy AWS Global Accelerator to connect the S3 buckets to the web application. C. Deploy Amazon CloudFront to connect the S3 buckets to CloudFront edge servers. D. Use Amazon Simple Queue Service (Amazon SQS) to connect the S3 buckets to the web application. Answer: C ========= Explanation: CloudFront uses a local cache to provide the response, AWS Global accelerator proxies requests and connects to the application all the time for the response. [https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-contentrestricting-access-to-s3.html\#private-content-granting-permissions-to-oai] Question: 156 ============= A company produces batch data that comes from different databases. The company also produces live stream data from network sensors and application APIs. The company needs to consolidate all the data into one place for business analytics. The company needs to process the incoming data and then stage the data in different Amazon S3 buckets. Teams will later run one-time queries and import the data into a business intelligence tool to show key performance indicators (KPIs). Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.) A. Use Amazon Athena foe one-time queries Use Amazon QuickSight to create dashboards for KPIs B. Use Amazon Kinesis Data Analytics for one-time queries Use Amazon QuickSight to create dashboards for KPIs C. Create custom AWS Lambda functions to move the individual records from me databases to an Amazon Redshift duster D. Use an AWS Glue extract transform, and toad (ETL) job to convert the data into JSON format Load the data into multiple Amazon OpenSearch Service (Amazon Elasticsearch Service) dusters E. Use blueprints in AWS Lake Formation to identify the data that can be ingested into a data lake Use AWS Glue to crawl the source extract the data and load the data into Amazon S3 in Apache Parquet format Answer: AE ========== Explanation: Amazon Athena is the best choice for running one-time queries on streaming data. Although Amazon Kinesis Data Analytics provides an easy and familiar standard SQL language to analyze streaming data in real-time, it is designed for continuous queries rather than one-time queries\[1\]. On the other hand, Amazon Athena is a serverless interactive query service that allows querying data in Amazon S3 using SQL. It is optimized for ad-hoc querying and is ideal for running one-time queries on streaming data\[2\].AWS Lake Formation uses as a central place to have all your data for analytics purposes (E). Athena integrate perfect with S3 and can makes queries (A). Question: 157 ============= A gaming company has a web application that displays scores. The application runs on Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon RDS for MySQL database. Users are starting to experience long delays and interruptions that are caused by database read performance. The company wants to improve the user experience while minimizing changes to the application\'s architecture. What should a solutions architect do to meet these requirements? A. Use Amazon ElastiCache in front of the database. B. Use RDS Proxy between the application and the database. C. Migrate the application from EC2 instances to AWS Lambda. D. Migrate the database from Amazon RDS for MySQL to Amazon DynamoDB. Answer: A ========= Explanation: ElastiCache can help speed up the read performance of the database by caching frequently accessed data, reducing latency and allowing the application to access the data more quickly. This solution requires minimal modifications to the current architecture, as ElastiCache can be used in conjunction with the existing Amazon RDS for MySQL database. Question: 158 ============= A business\'s backup data totals 700 terabytes (TB) and is kept in network attached storage (NAS) at its data center. This backup data must be available in the event of occasional regulatory inquiries and preserved for a period of seven years. The organization has chosen to relocate its backup data from its on-premises data center to Amazon Web Services (AWS). Within one month, the migration must be completed. The company\'s public internet connection provides 500 Mbps of dedicated capacity for data transport. What should a solutions architect do to ensure that data is migrated and stored at the LOWEST possible cost? A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive. B. Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3 Glacier. C. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive. D. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises NAS storage to Amazon S3 Glacier. Answer: A ========= Explanation: [https://www.omnicalculator.com/other/data-transfer] Question: 159 ============= A company wants to direct its users to a backup static error page if the company\'s primary website is unavailable. The primary website\'s DNS records are hosted in Amazon Route 53. The domain is pointing to an Application Load Balancer (ALB). The company needs a solution that minimizes changes and infrastructure overhead. Which solution will meet these requirements? A. Update the Route 53 records to use a latency routing policy. Add a static error page that is hosted in an Amazon S3 bucket to the records so that the traffic is sent to the most responsive endpoints. B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page that is hosted in an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy. C. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance that hosts a static error page as endpoints. Configure Route 53 to send requests to the instance only if the health checks fail for the ALB. D. Update the Route 53 records to use a multivalue answer routing policy. Create a health check. Direct traffic to the website if the health check passes. Direct traffic to a static error page that is hosted in Amazon S3 if the health check does not pass. Answer: B ========= Explanation: This solution meets the requirements of directing users to a backup static error page if the primary website is unavailable, minimizing changes and infrastructure overhead. Route 53 active-passive failover configuration can route traffic to a primary resource when it is healthy or to a secondary resource when the primary resource is unhealthy. Route 53 health checks can monitor the health of the ALB endpoint and trigger the failover when needed. The static error page can be hosted in an S3 bucket that is configured as a website, which is a simple and cost-effective way to serve static content. Option A is incorrect because using a latency routing policy can route traffic based on the lowest network latency for users, but it does not provide failover functionality. Option C is incorrect because using an active-active configuration with the ALB and an EC2 instance can increase the infrastructure overhead and complexity, and it does not guarantee that the EC2 instance will always be healthy. Option D is incorrect because using a multivalue answer routing policy can return multiple values for a query, but it does not provide failover functionality. Reference: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-failover.html https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html Question: 160 ============= A corporation has recruited a new cloud engineer who should not have access to the CompanyConfidential Amazon S3 bucket. The cloud engineer must have read and write permissions on an S3 bucket named AdminTools. Which IAM policy will satisfy these criteria? A. B. ![](media/image2.jpg) C. D. Answer: A ========= Explanation: [https://docs.amazonaws.cn/en\_us/IAM/latest/UserGuide/reference\_policies\_examples\_s3\_rwbucket.html] The policy is separated into two parts because the ListBucket action requires permissions on the bucket while the other actions require permissions on the objects in the bucket. You must use two different Amazon Resource Names (ARNs) to specify bucket-level and object-level permissions. The first Resource element specifies arn:aws:s3:::AdminTools for the ListBucket action so that applications can list all objects in the AdminTools bucket. Question: 161 ============= A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege. Which steps should the solutions architect do in conjunction to reach this goal? (Select two.) A. Have the deployment engineer use AWS account roof user credentials for performing AWS CloudFormation stack operations. B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached. C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the Administrate/Access IAM policy attached. D. Create a new IAM User for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only. E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using Dial IAM role. Answer: DE ========== Explanation: [https://docs.aws.amazon.com/IAM/latest/UserGuide/id\_roles.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id\_users.html] Question: 162 ============= A company runs a high performance computing (HPC) workload on AWS. The workload required lowlatency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options. What should a solutions architect propose to improve the performance of the workload? A. Choose a cluster placement group while launching Amazon EC2 instances. B. Choose dedicated instance tenancy while launching Amazon EC2 instances. C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances. D. Choose the required capacity reservation while launching Amazon EC2 instances. Answer: A ========= Explanation: [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2placementgroup.html] : \"A cluster placement group is a logical grouping of instances within a single Availability Zone that benefit from low network latency, high network throughput\" Question: 163 ============= A company wants to use the AWS Cloud to make an existing application highly available and resilient. The current version of the application resides in the company\'s data center. The application recently experienced data loss after a database server crashed because of an unexpected power outage. The company needs a solution that avoids any single points of failure. The solution must give the application the ability to scale to meet user demand. Which solution will meet these requirements? A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a Multi-AZ configuration. B. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group in a single Availability Zone. Deploy the database on an EC2 instance. Enable EC2 Auto Recovery. C. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance with a read replica in a single Availability Zone. Promote the read replica to replace the primary DB instance if the primary DB instance fails. D. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones Deploy the primary and secondary database servers on EC2 instances across multiple Availability Zones Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to create shared storage between the instances. Answer: A ========= Explanation: Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a Multi-AZ configuration. To make an existing application highly available and resilient while avoiding any single points of failure and giving the application the ability to scale to meet user demand, the best solution would be to deploy the application servers using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones and use an Amazon RDS DB instance in a Multi-AZ configuration. By using an Amazon RDS DB instance in a Multi-AZ configuration, the database is automatically replicated across multiple Availability Zones, ensuring that the database is highly available and can withstand the failure of a single Availability Zone. This provides fault tolerance and avoids any single points of failure. Question: 164 ============= A company wants to run a gaming application on Amazon EC2 instances that are part of an Auto Scaling group in the AWS Cloud. The application will transmit data by using UDP packets. The company wants to ensure that the application can scale out and in as traffic increases and decreases. What should a solutions architect do to meet these requirements? A. Attach a Network Load Balancer to the Auto Scaling group B. Attach an Application Load Balancer to the Auto Scaling group. C. Deploy an Amazon Route 53 record set with a weighted policy to route traffic appropriately D. Deploy a NAT instance that is configured with port forwarding to the EC2 instances in the Auto Scaling group. Answer: A ========= Explanation: This solution meets the requirements of running a gaming application that transmits data by using UDP packets and scaling out and in as traffic increases and decreases. A Network Load Balancer can handle millions of requests per second while maintaining high throughput at ultra low latency, and it supports both TCP and UDP protocols. An Auto Scaling group can automatically adjust the number of EC2 instances based on the demand and the scaling policies. Option B is incorrect because an Application Load Balancer does not support UDP protocol, only HTTP and HTTPS. Option C is incorrect because Amazon Route 53 is a DNS service that can route traffic based on different policies, but it does not provide load balancing or scaling capabilities. Option D is incorrect because a NAT instance is used to enable instances in a private subnet to connect to the internet or other AWS services, but it does not provide load balancing or scaling capabilities. Reference: https://aws.amazon.com/blogs/aws/new-udp-load-balancing-for-network-load-balancer/ https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html Question: 165 ============= A solutions architect is designing a customer-facing application for a company. The application\'s database will have a clearly defined access pattern throughout the year and will have a variable number of reads and writes that depend on the time of year. The company must retain audit records for the database for 7 days. The recovery point objective (RPO) must be less than 5 hours. Which solution meets these requirements? A. Use Amazon DynamoDB with auto scaling Use on-demand backups and Amazon DynamoDB Streams B. Use Amazon Redshift. Configure concurrency scaling. Activate audit logging. Perform database snapshots every 4 hours. C. Use Amazon RDS with Provisioned IOPS Activate the database auditing parameter Perform database snapshots every 5 hours D. Use Amazon Aurora MySQL with auto scaling. Activate the database auditing parameter Answer: A ========= Explanation: This solution meets the requirements of a customer-facing application that has a clearly defined access pattern throughout the year and a variable number of reads and writes that depend on the time of year. Amazon DynamoDB is a fully managed NoSQL database service that can handle any level of request traffic and data size. DynamoDB auto scaling can automatically adjust the provisioned read and write capacity based on the actual workload. DynamoDB on-demand backups can create full backups of the tables for data protection and archival purposes. DynamoDB Streams can capture a time-ordered sequence of item-level modifications in the tables for audit purposes. Option B is incorrect because Amazon Redshift is a data warehouse service that is designed for analytical workloads, not for customer-facing applications. Option C is incorrect because Amazon RDS with Provisioned IOPS can provide consistent performance for relational databases, but it may not be able to handle unpredictable spikes in traffic and data size. Option D is incorrect because Amazon Aurora MySQL with auto scaling can provide high performance and availability for relational databases, but it does not support audit logging as a parameter. Reference: https://aws.amazon.com/dynamodb/ https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html Question: 166 ============= A company hosts a two-tier application on Amazon EC2 instances and Amazon RDS. The application\'s demand varies based on the time of day. The load is minimal after work hours and on weekends. The EC2 instances run in an EC2 Auto Scaling group that is configured with a minimum of two instances and a maximum of five instances. The application must be available at all times, but the company is concerned about overall cost. Which solution meets the availability requirement MOST cost-effectively? A. Use all EC2 Spot Instances. Stop the RDS database when it is not in use. B. Purchase EC2 Instance Savings Plans to cover five EC2 instances. Purchase an RDS Reserved DB Instance C. Purchase two EC2 Reserved Instances Use up to three additional EC2 Spot Instances as needed. Stop the RDS database when it is not in use. D. Purchase EC2 Instance Savings Plans to cover two EC2 instances. Use up to three additional EC2 On-Demand Instances as needed. Purchase an RDS Reserved DB Instance. Answer: C ========= Explanation: This solution meets the requirements of a two-tier application that has a variable demand based on the time of day and must be available at all times, while minimizing the overall cost. EC2 Reserved Instances can provide significant savings compared to On-Demand Instances for the baseline level of usage, and they can guarantee capacity reservation when needed. EC2 Spot Instances can provide up to 90% savings compared to On-Demand Instances for any additional capacity that the application needs during peak hours. Spot Instances are suitable for stateless applications that can tolerate interruptions and can be replaced by other instances. Stopping the RDS database when it is not in use can reduce the cost of running the database tier. Option A is incorrect because using all EC2 Spot Instances can affect the availability of the application if there are not enough spare capacity or if the Spot price exceeds the maximum price. Stopping the RDS database when it is not in use can reduce the cost of running the database tier, but it can also affect the availability of the application. Option B is incorrect because purchasing EC2 Instance Savings Plans to cover five EC2 instances can lock in a fixed amount of compute usage per hour, which may not match the actual usage pattern of the application. Purchasing an RDS Reserved DB Instance can provide savings for the database tier, but it does not allow stopping the database when it is not in use. Option D is incorrect because purchasing EC2 Instance Savings Plans to cover two EC2 instances can lock in a fixed amount of compute usage per hour, which may not match the actual usage pattern of the application. Using up to three additional EC2 On-Demand Instances as needed can incur higher costs than using Spot Instances. Reference: https://aws.amazon.com/ec2/pricing/reserved-instances/ https://aws.amazon.com/ec2/spot/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER\_StopInstance.html Question: 167 ============= A company has an ecommerce checkout workflow that writes an order to a database and calls a service to process the payment. Users are experiencing timeouts during the checkout process. When users resubmit the checkout form, multiple unique orders are created for the same desired transaction. How should a solutions architect refactor this workflow to prevent the creation of multiple orders? A. Configure the web application to send an order message to Amazon Kinesis Data Firehose. Set the payment service to retrieve the message from Kinesis Data Firehose and process the order. B. Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the logged application path request Use Lambda to query the database, call the payment service, and pass in the order information. C. Store the order in the database. Send a message that includes the order number to Amazon Simple Notification Service (Amazon SNS). Set the payment service to poll Amazon SNS. retrieve the message, and process the order. D. Store the order in the database. Send a message that includes the order number to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the payment service to retrieve the message and process the order. Delete the message from the queue. Answer: D ========= Explanation: This approach ensures that the order creation and payment processing steps are separate and atomic. By sending the order information to an SQS FIFO queue, the payment service can process the order one at a time and in the order they were received. If the payment service is unable to process an order, it can be retried later, preventing the creation of multiple orders. The deletion of the message from the queue after it is processed will prevent the same message from being processed multiple times. Question: 168 ============= A company is planning to build a high performance computing (HPC) workload as a service solution that Is hosted on AWS A group of 16 AmazonEC2Ltnux Instances requires the lowest possible latency for node-to-node communication. The instances also need a shared block device volume for highperforming storage. Which solution will meet these requirements? A. Use a duster placement group. Attach a single Provisioned IOPS SSD Amazon Elastic Block Store (Amazon E BS) volume to all the instances by using Amazon EBS Multi-Attach B. Use a cluster placement group. Create shared \'lie systems across the instances by using Amazon Elastic File System (Amazon EFS) C. Use a partition placement group. Create shared tile systems across the instances by using Amazon Elastic File System (Amazon EFS). D. Use a spread placement group. Attach a single Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume to all the instances by using Amazon EBS Multi-Attach Answer: A ========= Explanation: 1. lowest possible latency + node to node ==\> cluster placement(must be within one AZ), so C, D out 2. For EBS Multi-Attach, up to 16 instances can be attached to a single volume==\>we have 16 linux instance==\>more close to A 3. \"need a shared block device volume\"==\>EBS Multi-attach is Block Storage whereas EFS is File Storage==\> B out 4. EFS automatically replicates data within and across 3 AZ==\>we use cluster placement so all EC2 are within one AZ. 5. EBS Multi-attach volumes can be used for clients within a single AZ. [https://repost.aws/questions/QUK2RANw1QTKCwpDUwCCI72A/efs-vs-ebs-mult-attach] Question: 169 ============= A company has an event-driven application that invokes AWS Lambda functions up to 800 times each minute with varying runtimes. The Lambda functions access data that is stored in an Amazon Aurora MySQL OB cluster. The company is noticing connection timeouts as user activity increases The database shows no signs of being overloaded. CPU. memory, and disk access metrics are all low. Which solution will resolve this issue with the LEAST operational overhead? A. Adjust the size of the Aurora MySQL nodes to handle more connections. Configure retry logic in the Lambda functions for attempts to connect to the database B. Set up Amazon ElastiCache tor Redls to cache commonly read items from the database. Configure the Lambda functions to connect to ElastiCache for reads. C. Add an Aurora Replica as a reader node. Configure the Lambda functions to connect to the reader endpoint of the OB cluster rather than lo the writer endpoint. D. Use Amazon ROS Proxy to create a proxy. Set the DB cluster as the target database Configure the Lambda functions lo connect to the proxy rather than to the DB cluster. Answer: D ========= Explanation: 1\. database shows no signs of being overloaded. CPU, memory, and disk access metrics are all low==\>A and C out. We cannot only add nodes instance or add read replica, because database workload is totally fine, very low. 2. \"least operational overhead\"==\>B out, because b need to configure lambda. 3. ROS proxy: Shares infrequently used connections; High availability with failover; Drives increased efficiency==\>proxy can leverage failover to redirect traffic from timeout rds instance to healthy rds instance. So D is right. Question: 170 ============= A company is building a containerized application on premises and decides to move the application to AWS. The application will have thousands of users soon after li is deployed. The company Is unsure how to manage the deployment of containers at scale. The company needs to deploy the containerized application in a highly available architecture that minimizes operational overhead. Which solution will meet these requirements? A. Store container images In an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service (Amazon ECS) cluster with the AWS Fargate launch type to run the containers. Use target tracking to scale automatically based on demand. B. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service (Amazon ECS) cluster with the Amazon EC2 launch type to run the containers. Use target tracking to scale automatically based on demand. C. Store container images in a repository that runs on an Amazon EC2 instance. Run the containers on EC2 instances that are spread across multiple Availability Zones. Monitor the average CPU utilization in Amazon CloudWatch. Launch new EC2 instances as needed D. Create an Amazon EC2 Amazon Machine Image (AMI) that contains the container image Launch EC2 Instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon CloudWatch alarm to scale out EC2 instances when the average CPU utilization threshold is breached. Answer: A ========= Explanation: AWS Fargate is a serverless experience for user applications, allowing the user to concentrate on building applications instead of configuring and managing servers. Fargate also automates resource management, allowing users to easily scale their applications in response to demand. Question: 171 ============= A company\'s application Is having performance issues The application staleful and needs to complete m-memory tasks on Amazon EC2 instances. The company used AWS CloudFormation to deploy infrastructure and used the M5 EC2 Instance family As traffic increased, the application performance degraded Users are reporting delays when the users attempt to access the application. Which solution will resolve these issues in the MOST operationally efficient way? A. Replace the EC2 Instances with T3 EC2 instances that run in an Auto Scaling group. Made the changes by using the AWS Management Console. B. Modify the CloudFormation templates to run the EC2 instances in an Auto Scaling group. Increase the desired capacity and the maximum capacity of the Auto Scaling group manually when an increase is necessary C. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Use Amazon CloudWatch built-in EC2 memory metrics to track the application performance for future capacity planning. D. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Deploy the Amazon CloudWatch agent on the EC2 instances to generate custom application latency metrics for future capacity planning. Answer: D ========= Explanation: [https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-memory-metrics-ec2/] Question: 172 ============= An ecommerce company has an order-processing application that uses Amazon API Gateway and an AWS Lambda function. The application stores data in an Amazon Aurora PostgreSQL database. During a recent sales event, a sudden surge in customer orders occurred. Some customers experienced timeouts and the application did not process the orders of those customers A solutions architect determined that the CPU utilization and memory utilization were high on the database because of a large number of open connections The solutions architect needs to prevent the timeout errors while making the least possible changes to the application. Which solution will meet these requirements? A. Configure provisioned concurrency for the Lambda function Modify the database to be a global database in multiple AWS Regions B. Use Amazon RDS Proxy to create a proxy for the database Modify the Lambda function to use the RDS Proxy endpoint instead of the database endpoint C. Create a read replica for the database in a different AWS Region Use query string parameters in API Gateway to route traffic to the read replica D. Migrate the data from Aurora PostgreSQL to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS\| Modify the Lambda function to use the OynamoDB table Answer: B ========= Explanation: Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. [https://aws.amazon.com/id/rds/proxy/] Question: 173 ============= A company runs a global web application on Amazon EC2 instances behind an Application Load Balancer The application stores data in Amazon Auror a\. The company needs to create a disaster recovery solution and can tolerate up to 30 minutes of downtime and potential data loss. The solution does not need to handle the load when the primary infrastructure is healthy What should a solutions architect do to meet these requirements? A. Deploy the application with the required infrastructure elements in place Use Amazon Route 53 to configure active-passive failover Create an Aurora Replica in a second AWS Region B. Host a scaled-down deployment of the application in a second AWS Region Use Amazon Route 53 to configure active-active failover Create an Aurora Replica in the second Region C. Replicate the primary infrastructure in a second AWS Region Use Amazon Route 53 to configure active-active failover Create an Aurora database that is restored from the latest snapshot D. Back up data with AWS Backup Use the backup to create the required infrastructure in a second AWS Region Use Amazon Route 53 to configure active-passive failover Create an Aurora second primary instance in the second Region Answer: A ========= Explanation: [https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html] Question: 174 ============= A company wants to measure the effectiveness of its recent marketing campaigns. The company performs batch processing on csv files of sales data and stores the results in an Amazon S3 bucket once every hour. The S3 bi petabytes of objects. The company runs one-time queries in Amazon Athena to determine which products are most popular on a particular date for a particular region Queries sometimes fail or take longer than expected to finish. Which actions should a solutions architect take to improve the query performance and reliability? (Select TWO.) A. Reduce the S3 object sizes to less than 126 MB B. Partition the data by date and region n Amazon S3 C. Store the files as large, single objects in Amazon S3. D. Use Amazon Kinosis Data Analytics to run the Queries as pan of the batch processing operation E. Use an AWS duo extract, transform, and load (ETL) process to convert the csv files into Apache Parquet format. Answer: B, E ============ Explanation: [https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-tips-for-amazon-athena/] This solution meets the requirements of measuring the effectiveness of marketing campaigns by performing batch processing on csv files of sales data and storing the results in an Amazon S3 bucket once every hour. An AWS duo ETL process can use services such as AWS Glue or AWS Data Pipeline to extract data from S3, transform it into a more efficient format such as Apache Parquet, and load it back into S3. Apache Parquet is a columnar storage format that can improve the query performance and reliability of Athena by reducing the amount of data scanned, improving compression ratio, and enabling predicate pushdown. Question: 175 ============= A company is running several business applications in three separate VPCs within me us-east-1 Region. The applications must be able to communicate between VPCs. The applications also must be able to consistently send hundreds to gigabytes of data each day to a latency-sensitive application that runs in a single on-premises data center. A solutions architect needs to design a network connectivity solution that maximizes costeffectiveness Which solution moots those requirements? A. Configure three AWS Site-to-Site VPN connections from the data center to AWS Establish connectivity by configuring one VPN connection for each VPC B. Launch a third-party virtual network appliance in each VPC Establish an iPsec VPN tunnel between the Data center and each virtual appliance C. Set up three AWS Direct Connect connections from the data center to a Direct Connect gateway in us-east-1 Establish connectivity by configuring each VPC to use one of the Direct Connect connections D. Set up one AWS Direct Connect connection from the data center to AWS. Create a transit gateway, and attach each VPC to the transit gateway. Establish connectivity between the Direct Connect connection and the transit gateway. Answer: D ========= Explanation: [https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-directconnect-aws-transit-gateway.html] Question: 176 ============= A company\'s web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy, which now requires the application to be accessed from one specific country only. Which configuration will meet this requirement? A. Configure the security group for the EC2 instances. B. Configure the security group on the Application Load Balancer. C. Configure AWS WAF on the Application Load Balancer in a VPC. D. Configure the network ACL for the subnet that contains the EC2 instances. Answer: C ========= Explanation: [https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographicmatch/] Question: 177 ============= A company has a mulli-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an Application Load Balancer (ALB). A solutions architect needs lo modify the infrastructure to be highly available without modifying the application. Which architecture should the solutions architect choose that provides high availability? A. Create an Auto Scaling group that uses three Instances across each of tv/o Regions. B. Modify the Auto Scaling group to use three instances across each of two Availability Zones. C. Create an Auto Scaling template that can be used to quickly create more instances in another Region. D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier. Answer: B ========= Explanation: High availability can be enabled for this architecture quite simply by modifying the existing Auto Scaling group to use multiple availability zones. The ASG will automatically balance the load so you don\'t actually need to specify the instances per AZ. Question: 178 ============= Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from users around the world. The files are stored In an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution. Which action should the solutions architect take to accomplish this? A. Generate presigned URLs for the files. B. Use cross-Region replication to all Regions. C. Use the geoproximtty feature of Amazon Route 53. D. Use Amazon CloudFront with the S3 bucket as its origin. Answer: D ========= Explanation: Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML pages, images, and videos. By using CloudFront, the HTML pages will be served to users from the edge location that is closest to them, resulting in faster delivery and a better user experience. CloudFront can also handle the high traffic and large number of requests expected for the global event, ensuring that the HTML pages are available and accessible to users around the world. Question: 179 ============= A company runs an application using Amazon ECS. The application creates esi/ed versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3. How can a solutions architect ensure that the application has permission to access Amazon S3? A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container. B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleAm in the task definition. C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster. D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account. Answer: B ========= Explanation: [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecstaskdefinition.html] Question: 180 ============= A solutions architect needs to securely store a database user name and password that an application uses to access an Amazon RDS DB instance. The application that accesses the database runs on an Amazon EC2 instance. The solutions architect wants to create a secure parameter in AWS Systems Manager Parameter Store. What should the solutions architect do to meet this requirement? A. Create an IAM role that has read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service (AWS KMS) key that is used to encrypt the parameter. Assign this IAM role to the EC2 instance. B. Create an IAM policy that allows read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service (AWS KMS) key that is used to encrypt the parameter. Assign this IAM policy to the EC2 instance. C. Create an IAM trust relationship between the Parameter Store parameter and the EC2 instance. Specify Amazon RDS as a principal in the trust policy. D. Create an IAM trust relationship between the DB instance and the EC2 instance. Specify Systems Manager as a principal in the trust policy. Answer: B ========= Explanation: [https://docs.aws.amazon.com/IAM/latest/UserGuide/reference\_aws-services-that-work-withiam.html] Question: 181 ============= An entertainment company is using Amazon DynamoDB to store media metadat a\. The application is read intensive and experiencing delays. The company does not have staff to handle additional operational overhead and needs to improve the performance efficiency of DynamoDB without reconfiguring the application. What should a solutions architect recommend to meet this requirement? A. Use Amazon ElastiCache for Redis. B. Use Amazon DynamoDB Accelerator (DAX). C. Replicate data by using DynamoDB global tables. D. Use Amazon ElastiCache for Memcached with Auto Discovery enabled. Answer: B ========= Explanation: [https://aws.amazon.com/dynamodb/dax/] Question: 182 ============= A security team wants to limit access to specific services or actions in all of the team\'s AWS accounts. All accounts belong to a large organization in AWS Organizations. The solution must be scalable and there must be a single point where permissions can be maintained. What should a solutions architect do to accomplish this? A. Create an ACL to provide access to the services or actions. B. Create a security group to allow accounts and attach it to user groups. C. Create cross-account roles in each account to deny access to the services or actions. D. Create a service control policy in the root organizational unit to deny access to the services or actions. Answer: D ========= Explanation: Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization\'s access control guidelines. See [https://docs.aws.amazon.com/organizations/latest/userguide/orgs\_manage\_policies\_scp.html]. Question: 183 ============= A company is concerned about the security of its public web application due to recent web attacks. The application uses an Application Load Balancer (ALB). A solutions architect must reduce the risk of DDoS attacks against the application. What should the solutions architect do to meet this requirement? A. Add an Amazon Inspector agent to the ALB. B. Configure Amazon Macie to prevent attacks. C. Enable AWS Shield Advanced to prevent attacks. D. Configure Amazon GuardDuty to monitor the ALB. Answer: C ========= Explanation: AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators. [https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html] Question: 184 ============= A company runs a production application on a fleet of Amazon EC2 instances. The application reads the data from an Amazon SQS queue and processes the messages in parallel. The message volume is unpredictable and often has intermittent traffic. This application should continually process messages without any downtime. Which solution meets these requirements MOST cost-effectively? A. Use Spot Instances exclusively to handle the maximum capacity required. B. Use Reserved Instances exclusively to handle the maximum capacity required. C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity. D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity. Answer: D ========= Explanation: We recommend that you use On-Demand Instances for applications with short-term, irregular workloads that cannot be interrupted. [https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-on-demand-instances.html] Question: 185 ============= A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 origin to store a static website. The company\'s security policy requires that all website traffic be inspected by AWS WAR How should the solutions architect comply with these requirements? A. Configure an S3 bucket policy lo accept requests coming from the AWS WAF Amazon Resource Name (ARN) only. B. Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin. C. Configure a security group that allows Amazon CloudFront IP addresses to access Amazon S3 only. Associate AWS WAF to CloudFront. D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF on the distribution. Answer: D ========= Explanation: [https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-contentrestricting-access-to-s3.html https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-webawswaf.html] Question: 186 ============= A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were created. The company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 Createlmage API operation is called within the company\'s account. Which solution will meet these requirements with the LEAST operational overhead? A. Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a Createlmage API call is detected. B. Configure AWS CloudTrail with an Amazon Simple Notification Service {Amazon SNS) notification that occurs when updated logs are sent to Amazon S3. Use Amazon Athena to create a new table and to query on Createlmage when an API call is detected. C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the Createlmage API call. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a Createlmage API call is detected. D. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a Createlmage API call is detected. Answer: C ========= Explanation: [https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-amievents.html\#:\~:text=For%20example%2C%20you%20can%20create%20an%20EventBridge%20rule% 20that%20detects%20when%20the%20AMI%20creation%20process%20has%20completed%20and% 20then%20invokes%20an%20Amazon%20SNS%20topic%20to%20send%20an%20email%20notificati on%20to%20you]. Creating an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call and configuring the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected will meet the requirements with the least operational overhead. Amazon EventBridge is a serverless event bus that makes it easy to connect applications together using data from your own applications, integrated Software as a Service (SaaS) applications, and AWS services. By creating an EventBridge rule for the CreateImage API call, the company can set up alerts whenever this operation is called within their account. The alert can be sent to an SNS topic, which can then be configured to send notifications to the company\'s email or other desired destination. Question: 187 ============= An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS. The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability to manage fine-grained permissions for the data and must minimize operational overhead. Which solution will meet these requirements? A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access. B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit access. C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register (he S3 bucket in Lake Formation. Use Lake Formation access controls to limit access. D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access. Answer: C ========= Explanation: To make all the data available to various teams and minimize operational overhead, the company can create a data lake by using AWS Lake Formation. This will allow the company to centralize all the data in one place and use fine-grained access controls to manage access to the data. To meet the requirements of the company, the solutions architect can create a data lake by using AWS Lake Formation, create an AWS Glue JDBC connection to Amazon RDS, and register the S3 bucket in Lake Formation. The solutions architect can then use Lake Formation access controls to limit access to the data. This solution will provide the ability to manage fine-grained permissions for the data and minimize operational overhead. Question: 188 ============= A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate microservice for processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes Amazon DynamoDB to store user requests before dispatching them to the processing microservices. The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is losing user requests. What should a solutions architect do to address this issue without impacting existing users? A. Add throttling on the API Gateway with server-side throttling limits. B. Use DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB. C. Create a secondary index in DynamoDB for the table with the user requests. D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB. Answer: D ========= Explanation: By using an SQS queue and Lambda, the solutions architect can decouple the API front end from the processing microservices and improve the overall scalability and availability of the system. The SQS queue acts as a buffer, allowing the API front end to continue accepting user requests even if the processing microservices are experiencing high workloads or are temporarily unavailable. The Lambda function can then retrieve requests from the SQS queue and write them to DynamoDB, ensuring that all user requests are stored and processed. This approach allows the company to scale the processing microservices independently from the API front end, ensuring that the API remains available to users even during periods of high demand. Question: 189 ============= A company needs to move data from an Amazon EC2 instance to an Amazon S3 bucket. The company must ensure that no API calls and no data are routed through public internet routes. Only the EC2 instance can have access to upload data to the S3 bucket. Which solution will meet these requirements? A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Attach a resource policy to the S3 bucket to only allow the EC2 instance\'s IAM role for access. B. Create a gateway VPC endpoint for Amazon S3 in the Availability Zone where the EC2 instance is located. Attach appropriate security groups to the endpoint. Attach a resource policy lo the S3 bucket to only allow the EC2 instance\'s IAM role for access. C. Run the nslookup tool from inside the EC2 instance to obtain the private IP address of the S3 bucket\'s service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance\'s IAM role for access. D. Use the AWS provided, publicly available ip-ranges.json tile to obtain the private IP address of the S3 bucket\'s service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance\'s IAM role for access. Answer: A ========= Explanation: ([https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specificiam-role/]) Question: 190 ============= A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that are stored in Amazon S3. This content is the same for all users. The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files to the users while reducing the load on the origin. Which solution meets these requirements MOST cost-effectively? A. Deploy an AWS Global Accelerator accelerator in front of the web servers. B. Deploy an Amazon CloudFront web distribution in front of the S3 bucket. C. Deploy an Amazon ElastiCache for Redis instance in front of the web servers. D. Deploy an Amazon ElastiCache for Memcached instance in front of the web servers. Answer: B ========= Explanation: ElastiCache, enhances the performance of web applications by quickly retrieving information from fully-managed in-memory data stores. It utilizes Memcached and Redis, and manages to considerably reduce the time your applications would, otherwise, take to read data from disk-based databases. Amazon CloudFront supports dynamic content from HTTP and WebSocket protocols, which are based on the Transmission Control Protocol (TCP) protocol. Common use cases include dynamic API calls, web pages and web applications, as well as an application\'s static files such as audio and images. It also supports on-demand media streaming over HTTP. AWS Global Accelerator supports both User Datagram Protocol (UDP) and TCP-based protocols. It is commonly used for nonHTTP use cases, such as gaming, IoT and voice over IP. It is also good for HTTP use cases that need static IP addresses or fast regional failover Question: 191 ============= A company has two applications: a sender application that sends messages with payloads to be processed and a processing application intended to receive the messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The sender application can send about 1.000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to process, they must be retained so that they do not impact the processing of any remaining messages. Which solution meets these requirements and is the MOST operationally efficient? A. Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the messages, respectively. B. Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the Kinesis Client Library (KCL). C. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process. D. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process. Integrate the sender application to write to the SNS topic. Answer: C ========= Explanation: [https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-withamazon-sqs-and-amazon-sns/] [https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-deadletter-queues.html] Question: 192 ============= A company has an AWS account used for software engineering. The AWS account has access to the company\'s on-premises data center through a pair of AWS Direct Connect connections. All non-VPC traffic routes to the virtual private gateway. A development team recently created an AWS Lambda function through the console. The development team needs to allow the function to access a database that runs in a private subnet in the company\'s data center. Which solution will meet these requirements? A. Configure the Lambda function to run in the VPC with the appropriate security group. B. Set up a VPN connection from AWS to the data center. Route the traffic from the Lambda function through the VPN. C. Update the route tables in the VPC to allow the Lambda function to access the on-premises data center through Direct Connect. D. Create an Elastic IP address. Configure the Lambda function to send traffic through the Elastic IP address without an elastic network interface. Answer: A ========= Explanation: [https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html\#vpc-managing-eni] Question: 193 ============= A company has a legacy data processing application that runs on Amazon EC2 instances. Data is processed sequentially, but the order of results does not matter. The application uses a monolithic architecture. The only way that the company can scale the application to meet increased demand is to increase the size of the instances. The company\'s developers have decided to rewrite the application to use a microservices architecture on Amazon Elastic Container Service (Amazon ECS). What should a solutions architect recommend for communication between the microservices? A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to the data consumers to process data from the queue. B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Add code to the data producers, and publish notifications to the topic. Add code to the data consumers to subscribe to the topic. C. Create an AWS Lambda function to pass messages. Add code to the data producers to call the Lambda function with a data object. Add code to the data consumers to receive a data object that is passed from the Lambda function. D. Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert data into the table. Add code to the data consumers to use the DynamoDB Streams API to detect new table entries and retrieve the data. Answer: A ========= Explanation: Queue has Limited throughput (300 msg/s without batching, 3000 msg/s with batching whereby upto 10 msg per batch operation; Msg duplicates not allowed in the queue (exactly-once delivery); Msg order is preserved (FIFO); Queue name must end with.fifo Question: 194 ============= A hospital wants to create digital copies for its large collection of historical written records. The hospital will continue to add hundreds of new documents each day. The hospital\'s data team will scan the documents and will upload the documents to the AWS Cloud. A solutions architect must implement a solution to analyze the documents, extract the medical information, and store the documents so that an application can run SQL queries on the dat a\. The solution must maximize scalability and operational efficiency. Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.) A. Write the document information to an Amazon EC2 instance that runs a MySQL database. B. Write the document information to an Amazon S3 bucket. Use Amazon Athena to query the data. C. Create an Auto Scaling group of Amazon EC2 instances to run a custom application that processes the scanned files and extracts the medical information. D. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Rekognition to convert the documents to raw text. Use Amazon Transcribe Medical to detect and extract relevant medical information from the text. E. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Textract to convert the documents to raw text. Use Amazon Comprehend Medical to detect and extract relevant medical information from the text. Answer: B, E ============ Explanation: This solution meets the requirements of creating digital copies for a large collection of historical written records, analyzing the documents, extracting the medical information, and storing the documents so that an application can run SQL queries on the data. Writing the document information to an Amazon S3 bucket can provide scalable and durable storage for the scanned files. Using Amazon Athena to query the data can provide serverless and interactive SQL analysis on data stored in S3. Creating an AWS Lambda function that runs when new documents are uploaded can provide event-driven and serverless processing of the scanned files. Using Amazon Textract to convert the documents to raw text can provide accurate optical character recognition (OCR) and extraction of structured data such as tables and forms from documents using artificial intelligence (AI). Using Amazon Comprehend Medical to detect and extract relevant medical information from the text can provide natural language processing (NLP) service that uses machine learning that has been pre-trained to understand and extract health data from medical text. Option A is incorrect because writing the document information to an Amazon EC2 instance that runs a MySQL database can increase the infrastructure overhead and complexity, and it may not be able to handle large volumes of data. Option C is incorrect because creating an Auto Scaling group of Amazon EC2 instances to run a custom application that processes the scanned files and extracts the medical information can increase the infrastructure overhead and complexity, and it may not be able to leverage existing AI and NLP services such as Textract and Comprehend Medical. Option D is incorrect because using Amazon Rekognition to convert the documents to raw text can provide image and video analysis, but it does not support OCR or extraction of structured data from documents. Using Amazon Transcribe Medical to detect and extract relevant medical information from the text can provide speech-to-text transcription service for medical conversations, but it does not support text analysis or extraction of health data from medical text. Reference: https://aws.amazon.com/s3/ https://aws.amazon.com/athena/ https://aws.amazon.com/lambda/ https://aws.amazon.com/textract/ https://aws.amazon.com/comprehend/medical/ Question: 195 ============= A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then will be available on demand. The event is expected to attract a global online audience. Which service will improve the performance of both the real-lime and on-demand streaming? A. Amazon CloudFront B. AWS Global Accelerator C. Amazon Route 53 D. Amazon S3 Transfer Acceleration Answer: A ========= Explanation: You can use CloudFront to deliver video on demand (VOD) or live streaming video using any HTTP origin. One way you can set up video workflows in the cloud is by using CloudFront together with AWS Media Services. [https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ondemand-streaming-video.html] Question: 196 ============= A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that significantly impacted the business. To ensure this does not happen again, the company wants a reliable database solution on AWS that minimizes data loss and stores every transaction on at least two nodes. Which solution meets these requirements? A. Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones. B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data. C. Create an Amazon RDS MySQL DB instance and then create a read replica in a separate AWS Region that synchronously replicates the data. D. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to synchronously replicate the data to an Amazon RDS MySQL DB instance. Answer: B ========= Explanation: Q: What does Amazon RDS manage on my behalf? Amazon RDS manages the work involved in setting up a relational database: from provisioning the infrastructure capacity you request to installing the database software. Once your database is up and running, Amazon RDS automates common administrative tasks such as performing backups and patching the software that powers your database. With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover. [https://aws.amazon.com/rds/faqs/] Question: 197 ============= An ecommerce company hosts its analytics application in the AWS Cloud. The application generates about 300 MB of data each month. The data is stored in JSON format. The company is evaluating a disaster recovery solution to back up the dat a\. The data must be accessible in milliseconds if it is needed, and the data must be kept for 30 days. Which solution meets these requirements MOST cost-effectively? A. Amazon OpenSearch Service (Amazon Elasticsearch Service) B. Amazon S3 Glacier C. Amazon S3 Standard D. Amazon RDS for PostgreSQL Answer: C ========= Explanation: This solution meets the requirements of a disaster recovery solution to back up the data that is generated by an analytics application, stored in JSON format, and must be accessible in milliseconds if it is needed. Amazon S3 Standard is a durable and scalable storage class for frequently accessed data. It can store any amount of data and provide high availability and performance. It can also support millisecond access time for data retrieval. Option A is incorrect because Amazon OpenSearch Service (Amazon Elasticsearch Service) is a search and analytics service that can index and query data, but it is not a backup solution for data stored in JSON format. Option B is incorrect because Amazon S3 Glacier is a low-cost storage class for data archiving and long-term backup, but it does not support millisecond access time for data retrieval. Option D is incorrect because Amazon RDS for PostgreSQL is a relational database service that can store and query structured data, but it is not a backup solution for data stored in JSON format. Reference: https://aws.amazon.com/s3/storage-classes/ https://aws.amazon.com/s3/faqs/\#Durability\_and\_data\_protection Question: 198 ============= A company has a Windows-based application that must be migrated to AWS. The application requires the use of a shared Windows file system attached to multiple Amazon EC2 Windows instances that are deployed across multiple Availability Zones. What should a solutions architect do to meet this requirement? A. Configure AWS Storage Gateway in volume gateway mode. Mount the volume to each Windows instance. B. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each Windows instance. C. Configure a file system by using Amazon Elastic File System (Amazon EFS). Mount the EFS file system to each Windows instance. D. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume. Mount the file system within the volume to each Windows instance. Answer: B ========= Explanation: This solution meets the requirement of migrating a Windows-based application that requires the use of a shared Windows file system attached to multiple Amazon EC2 Windows instances that are deployed across multiple Availability Zones. Amazon FSx for Windows File Server provides fully managed shared storage built on Windows Server, and delivers a wide range of data access, data management, and administrative capabilities. It supports the Server Message Block (SMB) protocol and can be mounted to EC2 Windows instances across multiple Availability Zones. Option A is incorrect because AWS Storage Gateway in volume gateway mode provides cloud-backed storage volumes that can be mounted as iSCSI devices from on-premises application servers, but it does not support SMB protocol or EC2 Windows instances. Option C is incorrect because Amazon Elastic File System (Amazon EFS) provides a scalable and elastic NFS file system for Linux-based workloads, but it does not support SMB protocol or EC2 Windows instances. Option D is incorrect because Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with EC2 instances, but it does not support SMB protocol or attaching multiple instances to the same volume. Reference: https://aws.amazon.com/fsx/windows/ https://docs.aws.amazon.com/fsx/latest/WindowsGuide/using-file-shares.html Question: 199 ============= A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications. Which action should the solutions architect take? A. Configure a CloudFront signed URL. B. Configure a CloudFront signed cookie. C. Configure a CloudFront field-level encryption profile. D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy. Answer: C ========= Explanation: [https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-levelencryption.html] \"With Amazon CloudFront, you can enforce secure end-to-end connections to origin servers by using HTTPS. Field-level encryption adds an additional layer of security that lets you protect specific data throughout system processing so that only certain applications can see it.\" Question: 200 ============= A company is planning to move its data to an Amazon S3 bucket. The data must be encrypted when it is stored in the S3 bucket. Additionally, the encryption key must be automatically rotated every year. Which solution will meet these requirements with the LEAST operational overhead? A. Move the data to the S3 bucket. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3 encryption keys. B. Create an AWS Key Management Service {AWS KMS) customer managed key. Enable automatic key rotation. Set the S3 bucket\'s default encryption behavior to use the customer managed KMS key. Move the data to the S3 bucket. C. Create an AWS Key Management Service (AWS KMS) customer managed key. Set the S3 bucket\'s default encryption behavior to use the customer managed KMS key. Move the data to the S3 bucket. Manually rotate the KMS key every year. D. Encrypt the data with customer key material before moving the data to the S3 bucket. Create an AWS Key Management Service (AWS KMS) key without key material. Import the customer key material into the KMS key. Enable automatic key rotation. Answer: B ========= Explanation: SSE-S3 - is free and uses AWS owned CMKs (CMK = Customer Master Key). The encryption key is owned and managed by AWS, and is shared among many accounts. Its rotation is automatic with time that varies as shown in the table here. The time is not explicitly defined. SSE-KMS - has two flavors: AWS managed CMK. This is free CMK generated only for your account. You can only view it policies and audit usage, but not manage it. Rotation is automatic - once per 1095 days (3 years), Customer managed CMK. This uses your own key that you create and can manage. Rotation is not enabled by default. But if you enable it, it will be automatically rotated every 1 year. This variant can also use an imported key material by you. If you create such key with an imported material, there is no automated rotation. Only manual rotation. SSE-C - customer provided key. The encryption key is fully managed by you outside of AWS. AWS will not rotate it. This solution meets the requirements of moving data to an Amazon S3 bucket, encrypting the data when it is stored in the S3 bucket, and automatically rotating the encryption key every year with the least operational overhead. AWS Key Management Service (AWS KMS) is a service that enables you to create and manage encryption keys for your data. A customer managed key is a symmetric encryption key that you create and manage in AWS KMS. You can enable automatic key rotation for a customer managed key, which means that AWS KMS generates new cryptographic material for the key every year. You can set the S3 bucket's default encryption behavior to use the customer managed KMS key, which means that any object that is uploaded to the bucket without specifying an encryption method will be encrypted with that key. Option A is incorrect because using server-side encryption with Amazon S3 managed encryption keys (SSE-S3) does not allow you to control or manage the encryption keys. SSE-S3 uses a unique key for each object, and encrypts that key with a master key that is regularly rotated by S3. However, you cannot enable or disable key rotation for SSE-S3 keys, or specify the rotation interval. Option C is incorrect because manually rotating the KMS key every year can increase the operational overhead and complexity, and it may not meet the requirement of rotating the key every year if you forget or delay the rotation process. Option D is incorrect because encrypting the data with customer key material before moving the data to the S3 bucket can increase the operational overhead and complexity, and it may not provide consistent encryption for all objects in the bucket. Creating a KMS key without key material and importing the customer key material into the KMS key can enable you to use your own source of random bits to generate your KMS keys, but it does not support automatic key rotation. Reference: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html Topic 3, Exam Pool C Question: 201 ============= An application runs on Amazon EC2 instances in private subnets. The application needs to access an Amazon DynamoDB table. What is the MOST secure way to access the table while ensuring that the traffic does not leave the AWS network? A. Use a VPC endpoint for DynamoDB. B. Use a NAT gateway in a public subnet. C. Use a NAT instance in a private subnet. D. Use the internet gateway attached to the VPC. Answer: A ========= Explanation: [https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpointsdynamodb.html] A VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use their private IP addresses to access DynamoDB with no exposure to the public internet. Your EC2 instances do not require public IP addresses, and you don\'t need an internet gateway, a NAT device, or a virtual private gateway in your VPC. You use endpoint policies to control access to DynamoDB. Traffic between your VPC and the AWS service does not leave the Amazon network. Question: 202 ============= A company provides an API to its users that automates inquiries for tax computations based on item prices. The company experiences a larger number of inquiries during the holiday season only that cause slower response times. A solutions architect needs to design a solution that is scalable and elastic. What should the solutions architect do to accomplish this? A. Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made. B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations. C. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received item names. D. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and passes the item names to the EC2 instance for tax computations. Answer: B ========= Explanation: Lambda server-less is scalable and elastic than EC2 api gateway solution Question: 203 ============= A company wants to use high performance computing (HPC) infrastructure on AWS for financial risk modeling. The company\'s HPC workloads run on Linux. Each HPC workflow runs on hundreds of Amazon EC2 Spot Instances, is shorl-lived, and generates thousands of output files that are ultimately stored in persistent storage for analytics and long-term future use. The company seeks a cloud storage solution that permits the copying of on-premises data to longterm persistent storage to make data available for processing by all EC2 instances. The solution should also be a high performance file system that is integrated with persistent storage to read and write datasets and output files. Which combination of AWS services meets these requirements? A. Amazon FSx for Lustre integrated with Amazon S3 B. Amazon FSx for Windows File Server integrated with Amazon S3 C. Amazon S3 Glacier integrated with Amazon Elastic Block Store (Amazon EBS) D. Amazon S3 bucket with a VPC endpoint integrated with an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) volume Answer: A ========= Explanation: [https://aws.amazon.com/fsx/lustre/] Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. Many workloads such as machine learning, high performance computing (HPC), video rendering, and financial simulations depend on compute instances accessing the same set of data through high-performance shared storage. Question: 204 ============= A company is running a publicly accessible serverless application that uses Amazon API Gateway and AWS Lambd a\. The application\'s traffic recently spiked due to fraudulent requests from botnets. Which steps should a solutions architect take to block requests from unauthorized users? (Select TWO.) A. Create a usage plan with an API key that is shared with genuine users only. B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses. C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out. D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint. E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call. Answer: A, C ============ Explanation: [https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usageplans.html\#:\~:text=Don%27t%20rely%20on%20API%20keys%20as%20your%20only%20means%20of] [%20authentication%20and%20authorization%20for%20your%20APIs] [https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html] Question: 205 ============= A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2 On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session data management. The company is willing to make changes to code if needed. What should the solutions architect do to ensure that the architecture supports distributed session data management? A. Use Amazon ElastiCache to manage and store session data. B. Use session affinity (sticky sessions) of the ALB to manage session data. C. Use Session Manager from AWS Systems Manager to manage the session. D. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session Answer: A ========= Explanation: [https://aws.amazon.com/vi/caching/session-management/] In order to address scalability and to provide a shared data storage for sessions that can be accessible from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution to for this is to leverage an In-Memory Key/Value store such as Redis and Memcached. ElastiCache offerings for In-Memory key/value stores include ElastiCache for Redis, which can support replication, and ElastiCache for Memcached which does not support replication. Question: 206 ============= A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An administrator updates the website content infrequently and uses an SFTP client to upload new documents. The company decides to host its website on AWS and to use Amazon CloudFront. The company\'s solutions architect creates a CloudFront distribution. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront origin. Which solution will meet these requirements? A. Create a virtual server by using Amazon Lightsail. Configure the web server in the Lightsail instance. Upload website content by using an SFTP client. B. Create an AWS Auto Scaling group for Amazon EC2 instances. Use an Application Load Balancer. Upload website content by using an SFTP client. C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload website content by using theAWSCLI. D. Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website content by using the SFTP client. Answer: C ========= Explanation: [https://docs.aws.amazon.com/cli/latest/reference/transfer/describe-server.html] Question: 207 ============= A company is developing an ecommerce application that will consist of a load-balanced front end, a container-based application, and a relational database. A solutions architect needs to create a highly available solution that operates with as little manual intervention as possible. Which solutions meet these requirements? (Select TWO.) A. Create an Amazon RDS DB instance in Multi-AZ mode. B. Create an Amazon RDS DB instance and one or more replicas in another Availability Zone. C. Create an Amazon EC2 in stance-based Docker cluster to handle the dynamic application load. D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load. E. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type to handle the dynamic application load. Answer: A, D ============ Explanation: [https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html] 1. Relational database: RDS 2. Container-based applications: ECS \"Amazon ECS enables you to launch and stop your container-based applications by using simple API calls. You can also retrieve the state of your cluster from a centralized service and have access to many familiar Amazon EC2 features.\" 3. Little manual intervention: Fargate You can run your tasks and services on a serverless infrastructure that is managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of Amazon EC2 instances that you manage. Question: 208 ============= A company is designing a cloud communications platform that is driven by APIs. The application is hosted on Amazon EC2 instances behind a Network Load Balancer (NLB). The company uses Amazon API Gateway to provide external users with access to the application through APIs. The company wants to protect the platform against web exploits like SQL injection and also wants to detect and mitigate large, sophisticated DDoS attacks. Which combination of solutions provides the MOST protection? (Select TWO.) A. Use AWS WAF to protect the NLB. B. Use AWS Shield Advanced with the NLB. C. Use AWS WAF to protect Amazon API Gateway. D. Use Amazon GuardDuty with AWS Shield Standard. E. Use AWS Shield Standard with Amazon API Gateway. Answer: BC ========== Explanation: AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators. AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources. You can protect the following resource types: Amazon CloudFront distribution Amazon API Gateway REST API Application Load Balancer AWS AppSync GraphQL API Amazon Cognito user pool [https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html] Question: 209 ============= A company is running a batch application on Amazon EC2 instances. The application consists of a backend with multiple Amazon RDS databases. The application is causing a high number of leads on the databases. A solutions architect must reduce the number of database reads while ensuring high availability. What should the solutions architect do to meet this requirement? A. Add Amazon RDS read replicas B. Use Amazon ElastCache for Redis C. Use Amazon Route 53 DNS caching D. Use Amazon ElastiCache for Memcached Answer: A ========= Explanation: This solution meets the requirement of reducing the number of database reads while ensuring high availability for a batch application that consists of a backend with multiple Amazon RDS databases. Amazon RDS read replicas are copies of the primary database instance that can serve read-only traffic. You can create one or more read replicas for a primary database instance and connect to them using a special endpoint. Read replicas can improve the performance and availability of your application by offloading read queries from the primary database instance. Option B is incorrect because using Amazon ElastiCache for Redis can provide a fast, in-memory data store that can cache frequently accessed data, but it does not support replication from Amazon RDS databases. Option C is incorrect because using Amazon Route 53 DNS caching can improve the performance and availability of DNS queries, but it does not reduce the number of database reads. Option D is incorrect because using Amazon ElastiCache for Memcached can provide a fast, inmemory data store that can cache frequently accessed data, but it does not support replication from Amazon RDS databases. Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER\_ReadRepl.html Question: 210 ============= A company has a web application that is based on Java and PHP The company plans to move the application from on premises to AWS The company needs the ability to test new site features frequently. The company also needs a highly available and managed solution that requires minimum operational overhead Which solution will meet these requirements? A. Create an Amazon S3 bucket Enable static web hosting on the S3 bucket Upload the static content to the S3 bucket Use AWS Lambda to process all dynamic content B. Deploy the web application to an AWS Elastic Beanstalk environment Use URL swapping to switch between multiple Elastic Beanstalk environments for feature testing C. Deploy the web application lo Amazon EC2 instances that are configured with Java and PHP Use Auto Scaling groups and an Application Load Balancer to manage the website\'s availability D. Containerize the web application Deploy the web application to Amazon EC2 instances Use the AWS Load Balancer Controller to dynamically route traffic between containers thai contain the new site features for testing. Answer: B ========= Explanation: Frequent feature testing - - Multiple Elastic Beanstalk environments can be created easily for development, testing and production use cases. - Traffic can be routed between environments for A/B testing and feature iteration using simple URL swapping techniques. No complex routing rules or infrastructure changes required. Question: 211 ============= A company has a Microsoft NET application that runs on an on-premises Windows Server Trie application stores data by using an Oracle Database Standard Edition server The company is planning a migration to AWS and wants to minimize development changes while moving the application The AWS application environment should be highly available Which combination of actions should the company take to meet these requirements? (Select TWO ) A. Refactor the application as serverless with AWS Lambda functions running NET Cote B. Rehost the application in AWS Elastic Beanstalk with the NET platform in a Multi-AZ deployment C. Replatform the applica