Summary

This document appears to be a 2024/2025 practice test for the AWS Certified Solutions Architect - Associate (SAA-C03) exam.

Full Transcript

Amazon Web Services Exam SAA-C03
AWS Certified Solutions Architect - Associate (SAA-C03)
Version: 14.0 [ Total Questions: 550 ]

Topic breakdown:
Topic 1: Exam Pool A - 114 questions
Topic 2: Exam Pool B - 86 questions
Topic 3: Exam Pool C - 157 questions
Topic 4: Exam Pool D - 193 questions

Topic 1, Exam Pool A

Question No : 1 - (Topic 1)
A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access data. The solution must be fully managed.
Which AWS solution meets these requirements?
A. Create an AWS Storage Gateway volume gateway. Create a file share that uses the required client protocol. Connect the application server to the file share.
B. Create an AWS Storage Gateway tape gateway. Configure tapes to use Amazon S3. Connect the application server to the tape gateway.
C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.
Answer: D
Explanation: Amazon FSx has native support for Windows file system features and for the industry-standard Server Message Block (SMB) protocol to access file storage over a network.
https://aws.amazon.com/fsx/lustre/
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html

Question No : 2 - (Topic 1)
A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
Answer: A
Explanation: The aws:PrincipalOrgID global condition key provides an alternative to listing all the account IDs for all AWS accounts in an organization. For example, the following Amazon S3 bucket policy allows members of any account in the XXX organization to add an object to the examtopics bucket.

    {
        "Version": "2012-10-17",
        "Statement": {
            "Sid": "AllowPutObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::examtopics/*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ["XXX"]}}
        }
    }

https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html

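For illustration, a minimal boto3 sketch (not part of the original dump) that applies a policy like the one above; the bucket name and organization ID are hypothetical placeholders.

    import json
    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket name and organization ID, for illustration only.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowOrgPutObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-project-reports/*",
            # Grants access to every principal in the organization, and nobody else.
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ["o-exampleorgid"]}},
        }],
    }

    s3.put_bucket_policy(Bucket="example-project-reports", Policy=json.dumps(policy))
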
Question No : 3 - (Topic 1)
A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.
What should a solutions architect do to accomplish this goal?
A. Turn on AWS Config with the appropriate rules.
B. Turn on AWS Trusted Advisor with the appropriate checks.
C. Turn on Amazon Inspector with the appropriate assessment template.
D. Turn on Amazon S3 server access logging. Configure Amazon EventBridge (Amazon CloudWatch Events).
Answer: A
Explanation: To ensure that Amazon S3 buckets do not have unauthorized configuration changes, a solutions architect should turn on AWS Config with the appropriate rules. AWS Config is a service that allows users to audit and assess their AWS resource configurations for compliance with industry standards and internal policies. It provides a detailed view of the resources and their configurations, including information on how the resources are related to each other. By turning on AWS Config with the appropriate rules, users can identify and remediate unauthorized configuration changes to their Amazon S3 buckets.

Question No : 4 - (Topic 1)
A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application tier running on Amazon EC2 instances inside a VPC.
Which combination of steps should a solutions architect take to accomplish this? (Select TWO.)
A. Configure a VPC gateway endpoint for Amazon S3 within the VPC.
B. Create a bucket policy to make the objects in the S3 bucket public.
C. Create a bucket policy that limits access to only the application tier running in the VPC.
D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
E. Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket.
Answer: A,C
Explanation: https://aws.amazon.com/premiumsupport/knowledge-center/s3-private-connection-no-authentication/

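A minimal sketch of how options A and C could fit together, assuming a hypothetical bucket and an already-created gateway endpoint ID; the deny-unless-from-endpoint pattern is one common way to limit a bucket to a VPC.

    import json
    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket name and VPC endpoint ID, for illustration only.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnlessFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-user-data",
                "arn:aws:s3:::example-user-data/*",
            ],
            # Reject any request that does not arrive through the VPC gateway endpoint.
            "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
        }],
    }

    s3.put_bucket_policy(Bucket="example-user-data", Policy=json.dumps(policy))
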
Question No : 5 - (Topic 1)
A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company's product manager needs to access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide access to the product manager by following the principle of least privilege.
Which solution will meet these requirements?
A. Share the dashboard from the CloudWatch console. Enter the product manager's email address, and complete the sharing steps. Provide a shareable link for the dashboard to the product manager.
B. Create an IAM user specifically for the product manager. Attach the CloudWatchReadOnlyAccess managed policy to the user. Share the new login credentials with the product manager. Share the browser URL of the correct dashboard with the product manager.
C. Create an IAM user for the company's employees. Attach the ViewOnlyAccess AWS managed policy to the IAM user. Share the new login credentials with the product manager. Ask the product manager to navigate to the CloudWatch console and locate the dashboard by name in the Dashboards section.
D. Deploy a bastion server in a public subnet. When the product manager requires access to the dashboard, start the server and share the RDP credentials. On the bastion server, ensure that the browser is configured to open the dashboard URL with cached AWS credentials that have appropriate permissions to view the dashboard.
Answer: B
Explanation: To provide the product manager access to the Amazon CloudWatch dashboard while following the principle of least privilege, a solutions architect should create an IAM user specifically for the product manager and attach the CloudWatchReadOnlyAccess managed policy to the user. This policy allows the user to view the dashboard without being able to make any changes to it. The solutions architect should then share the new login credentials with the product manager and provide them with the browser URL of the correct dashboard.

Question No : 6 - (Topic 1)
A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week.
What should the company do to guarantee the EC2 capacity?
A. Purchase Reserved Instances that specify the Region needed.
B. Create an On-Demand Capacity Reservation that specifies the Region needed.
C. Purchase Reserved Instances that specify the Region and three Availability Zones needed.
D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.
Answer: D
Explanation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html
Reserved Instances would require paying for the whole term (1 year or 3 years), which is not cost-effective for a 1-week event.

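A minimal boto3 sketch of option D, assuming hypothetical instance type, count, and Availability Zones; a Capacity Reservation is zonal, so the sketch creates one reservation per AZ with an end date one week out.

    from datetime import datetime, timedelta, timezone

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Hypothetical instance type, count, and Availability Zones, for illustration only.
    end_date = datetime.now(timezone.utc) + timedelta(days=7)
    for az in ["us-east-1a", "us-east-1b", "us-east-1c"]:
        ec2.create_capacity_reservation(
            InstanceType="m5.large",
            InstancePlatform="Linux/UNIX",
            AvailabilityZone=az,      # capacity is reserved per Availability Zone
            InstanceCount=10,
            EndDateType="limited",    # release the capacity automatically
            EndDate=end_date,         # one week, matching the event duration
        )
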
Question No : 7 - (Topic 1)
A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?
A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of each message, and analyze the message data as needed. If a message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
Answer: A
Explanation: https://aws.amazon.com/kinesis/data-firehose/features/

Question No : 8 - (Topic 1)
A company collects temperature, humidity, and atmospheric pressure data in cities across multiple continents. The average volume of data collected per site each day is 500 GB. Each site has a high-speed internet connection. The company's weather forecasting applications are based in a single Region and analyze the data daily.
What is the FASTEST way to aggregate data from all of these global sites?
A. Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket.
B. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
C. Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
D. Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Once a day, take an EBS snapshot and copy it to the centralized Region. Restore the EBS volume in the centralized Region and run an analysis on the data daily.
Answer: A
Explanation: You might want to use Transfer Acceleration on a bucket for various reasons, including the following: you have customers that upload to a centralized bucket from all over the world; you transfer gigabytes to terabytes of data on a regular basis across continents; you are unable to utilize all of your available bandwidth over the internet when uploading to Amazon S3.
"Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. Customers who have either web or mobile applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over the Internet."
"Improved throughput - You can upload parts in parallel to improve throughput."
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://aws.amazon.com/s3/transfer-acceleration/
https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html

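A minimal boto3 sketch of option A, assuming a hypothetical bucket and file; it enables Transfer Acceleration on the bucket and then uploads through the accelerate endpoint with parallel multipart parts.

    import boto3
    from boto3.s3.transfer import TransferConfig
    from botocore.config import Config

    # Hypothetical bucket and file names, for illustration only.
    bucket = "example-weather-central"

    boto3.client("s3").put_bucket_accelerate_configuration(
        Bucket=bucket,
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Upload through the accelerate endpoint, splitting large objects into
    # parallel multipart parts for higher throughput.
    s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    transfer_config = TransferConfig(multipart_threshold=64 * 1024 * 1024, max_concurrency=10)
    s3_accel.upload_file("site-data.tar", bucket, "site-data.tar", Config=transfer_config)
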
Question No : 9 - (Topic 1)
A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule's target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.
D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
Answer: B
Explanation: Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between software-as-a-service (SaaS) applications like Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift, in just a few clicks. https://aws.amazon.com/appflow/

Question No : 10 - (Topic 1)
A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that run in an Auto Scaling group. The company has deployments across multiple AWS Regions.
The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions.
Which solution will meet these requirements?
A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Use the NLB as an AWS Global Accelerator endpoint in each Region.
B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Use the ALB as an AWS Global Accelerator endpoint in each Region.
C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 latency record that points to aliases for each NLB. Create an Amazon CloudFront distribution that uses the latency record as an origin.
D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 weighted record that points to aliases for each ALB. Deploy an Amazon CloudFront distribution that uses the weighted record as an origin.
Answer: A
Explanation: HTTP/HTTPS - ALB; TCP and UDP - NLB. Lowest-latency routing with more throughput, automated failover, and anycast IP addressing - AWS Global Accelerator. Caching at edge locations - CloudFront. AWS Global Accelerator automatically checks the health of your applications and routes user traffic only to healthy application endpoints. If the health status changes or you make configuration updates, AWS Global Accelerator reacts instantaneously to route your users to the next available endpoint.
https://aws.amazon.com/global-accelerator/faqs/

Question No : 11 - (Topic 1)
A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks.
Which solution meets these requirements?
A. Enable Amazon GuardDuty on the account.
B. Enable Amazon Inspector on the EC2 instances.
C. Enable AWS Shield and assign Amazon Route 53 to it.
D. Enable AWS Shield Advanced and assign the ELB to it.
Answer: D
Explanation: https://aws.amazon.com/shield/faqs/

Question No : 12 - (Topic 1)
A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application's performance. The application consists of application tiers that communicate with each other by way of RESTful services. Transactions are dropped when one tier becomes overloaded. A solutions architect must design a solution that resolves these issues and modernizes the application.
Which solution meets these requirements and is the MOST operationally efficient?
A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service (Amazon SQS) as the communication layer between application services.
B. Use Amazon CloudWatch metrics to analyze the application performance history to determine the servers' peak utilization during the performance failures. Increase the size of the application servers' Amazon EC2 instances to meet the peak requirements.
C. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required.
D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.
Answer: A
Explanation: A similar setup is shown in the tutorial "Build a Serverless Web Application with AWS Lambda, Amazon API Gateway, AWS Amplify, Amazon DynamoDB, and Amazon Cognito": https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/module-4/

Question No : 13 - (Topic 1)
A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are created. After 7 days the files are rarely accessed.
The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage issues.
Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.
D. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
Answer: B
Explanation: Amazon S3 File Gateway is a hybrid cloud storage service that enables on-premises applications to seamlessly use Amazon S3 cloud storage. It provides a file interface to Amazon S3 and supports SMB and NFS protocols. It also supports S3 Lifecycle policies that can automatically transition data from S3 Standard to S3 Glacier Deep Archive after a specified period of time. This solution will meet the requirements of increasing the company's available storage space without losing low-latency access to the most recently accessed files and providing file lifecycle management to avoid future storage issues.
Reference: https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

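A minimal boto3 sketch of the Lifecycle policy in option B, assuming a hypothetical bucket behind the S3 File Gateway.

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket name, for illustration only.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-file-gateway-data",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply the rule to every object
                "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
            }]
        },
    )
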
Question No : 14 - (Topic 1)
A company wants to migrate an on-premises data center to AWS. The data center hosts an SFTP server that stores its data on an NFS-based file system. The server holds 200 GB of data that needs to be transferred. The server must be hosted on an Amazon EC2 instance that uses an Amazon Elastic File System (Amazon EFS) file system.
Which combination of steps should a solutions architect take to automate this task? (Select TWO.)
A. Launch the EC2 instance into the same Availability Zone as the EFS file system.
B. Install an AWS DataSync agent in the on-premises data center.
C. Create a secondary Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instance for the data.
D. Manually use an operating system copy command to push the data to the EC2 instance.
E. Use AWS DataSync to create a suitable location configuration for the on-premises SFTP server.
Answer: B,E
Explanation: AWS DataSync is an online data movement and discovery service that simplifies data migration and helps users quickly, easily, and securely move their file or object data to, from, and between AWS storage services. Users can use AWS DataSync to transfer data between on-premises and AWS storage services. To use AWS DataSync, users need to install an AWS DataSync agent in the on-premises data center. The agent is a software appliance that connects to the source or destination storage system and handles the data transfer to or from AWS over the network. Users also need to use AWS DataSync to create a suitable location configuration for the on-premises SFTP server. A location is a logical representation of a storage system that contains files or objects that users want to transfer using DataSync. Users can create locations for NFS shares, SMB shares, HDFS file systems, self-managed object storage, Amazon S3 buckets, Amazon EFS file systems, Amazon FSx for Windows File Server file systems, Amazon FSx for Lustre file systems, Amazon FSx for OpenZFS file systems, Amazon FSx for NetApp ONTAP file systems, and AWS Snowcone devices.

Question No : 15 - (Topic 1)
A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS.
What should a solutions architect do to meet this requirement?
A. Update the ALB's network ACL to accept only HTTPS traffic.
B. Create a rule that replaces the HTTP in the URL with HTTPS.
C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
D. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).
Answer: C
Explanation: The knowledge center article "How can I redirect HTTP requests to HTTPS using an Application Load Balancer?" (last updated 2020-10-30) describes redirecting HTTP requests to HTTPS using Application Load Balancer listener rules.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/elb-redirect-http-to-https-using-alb/

Question No : 16 - (Topic 1)
A company is preparing to deploy a new serverless workload. A solutions architect must use the principle of least privilege to configure permissions that will be used to run an AWS Lambda function. An Amazon EventBridge (Amazon CloudWatch Events) rule will invoke the function.
Which solution meets these requirements?
A. Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
B. Add an execution role to the function with lambda:InvokeFunction as the action and Service:amazonaws.com as the principal.
C. Add a resource-based policy to the function with lambda:* as the action and Service:events.amazonaws.com as the principal.
D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.
Answer: D
Explanation: https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html#lambda-permissions

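Option D maps directly onto the Lambda AddPermission API. A minimal boto3 sketch, with a hypothetical function name and rule ARN:

    import boto3

    lambda_client = boto3.client("lambda")

    # Hypothetical function name and EventBridge rule ARN, for illustration only.
    lambda_client.add_permission(
        FunctionName="example-function",
        StatementId="AllowEventBridgeInvoke",
        Action="lambda:InvokeFunction",    # least privilege: only invocation
        Principal="events.amazonaws.com",  # the EventBridge service principal
        SourceArn="arn:aws:events:us-east-1:123456789012:rule/example-rule",
    )

Scoping SourceArn to the specific rule keeps any other EventBridge rule from invoking the function.
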
Question No : 17 - (Topic 1)
A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule.
B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the required Regions. Configure Systems Manager to rotate the secrets on a schedule.
C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets.
Answer: A
Explanation: https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/

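A minimal boto3 sketch of option A, assuming the secret already exists and a rotation Lambda function has been set up; the secret name, replica Regions, and function ARN are hypothetical.

    import boto3

    sm = boto3.client("secretsmanager", region_name="us-east-1")

    # Hypothetical secret name and replica Regions, for illustration only.
    sm.replicate_secret_to_regions(
        SecretId="prod/mysql/credentials",
        AddReplicaRegions=[{"Region": "eu-west-1"}, {"Region": "ap-southeast-2"}],
    )

    # Hypothetical rotation function ARN; rotation runs on a schedule thereafter.
    sm.rotate_secret(
        SecretId="prod/mysql/credentials",
        RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-mysql",
        RotationRules={"AutomaticallyAfterDays": 30},  # monthly, matching the maintenance cycle
    )
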
Question No : 18 - (Topic 1)
A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally by using AWS Organizations. The company's security team needs a single sign-on (SSO) solution across all the company's accounts. The company must continue managing the users and groups in its on-premises self-managed Microsoft Active Directory.
Which solution will meet these requirements?
A. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
C. Use AWS Directory Service. Create a two-way trust relationship with the company's self-managed Microsoft Active Directory.
D. Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.
Answer: A
Explanation: To provide single sign-on (SSO) across all the company's accounts while continuing to manage users and groups in its on-premises self-managed Microsoft Active Directory, the solution is to enable AWS Single Sign-On (AWS SSO) from the AWS SSO console and create a one-way forest trust or a one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory. This solution is described in the AWS documentation.

Question No : 19 - (Topic 1)
A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple processing to transform the data and save the data in JSON format for later analysis. Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files. On other days, users will upload a few files or no files.
Which solution meets these requirements with the LEAST operational overhead?
A. Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
Answer: C
Explanation: Amazon S3 sends event notifications about S3 buckets (for example, object created, object removed, or object restored) to an SNS topic in the same Region. The SNS topic publishes the event to an SQS queue in the central Region. The SQS queue is configured as the event source for your Lambda function and buffers the event messages for the Lambda function. The Lambda function polls the SQS queue for messages and processes the Amazon S3 event notifications according to your application's requirements.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/subscribe-a-lambda-function-to-event-notifications-from-s3-buckets-in-different-aws-regions.html

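A minimal boto3 sketch of the S3-to-SQS wiring in option C, assuming a hypothetical bucket and a queue whose access policy already allows s3.amazonaws.com to send messages.

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket name and queue ARN, for illustration only.
    s3.put_bucket_notification_configuration(
        Bucket="example-upload-bucket",
        NotificationConfiguration={
            "QueueConfigurations": [{
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:uploads-queue",
                "Events": ["s3:ObjectCreated:*"],  # fire on every new upload
            }]
        },
    )

The queue is then configured as the Lambda function's event source, so processing scales with upload volume and sits idle on quiet days.
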
Question No : 20 - (Topic 1)
A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images.
Which method is the MOST cost-effective for hosting the website?
A. Containerize the website and host it in AWS Fargate.
B. Create an Amazon S3 bucket and host the website there.
C. Deploy a web server on an Amazon EC2 instance to host the website.
D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.
Answer: B
Explanation: In static websites, web pages are returned by the server prebuilt. They use simple languages such as HTML, CSS, or JavaScript, and there is no server-side processing of content. Because pages are returned with no change, static websites are fast, require no database interaction, and are less costly since the host does not need to support server-side processing.
In dynamic websites, web pages are built at runtime according to the user's demand, using server-side scripting languages such as PHP, Node.js, or ASP.NET. They are slower than static websites, but updates and interaction with databases are possible.

Question No : 21 - (Topic 1)
A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is growing quickly. The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS with minimum code changes and minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming requests.
B. Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming requests.
C. Use AWS Lambda with new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use Amazon API Gateway as an entry point to the Lambda functions.
D. Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming requests at the appropriate scale.
Answer: A
Explanation: AWS Fargate is a serverless compute engine that lets users run containers without having to manage servers or clusters of Amazon EC2 instances. Users can use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Amazon ECS is a fully managed container orchestration service for Docker containers. Service Auto Scaling is a feature that allows users to adjust the desired number of tasks in an ECS service based on CloudWatch metrics, such as CPU utilization or request count. Users can use AWS Fargate on Amazon ECS to migrate the application to AWS with minimum code changes and minimum development effort, as they only need to package their application in containers and specify the CPU and memory requirements. Users can also use an Application Load Balancer to distribute the incoming requests. An Application Load Balancer operates at the application layer and routes traffic to targets based on the content of the request. Users can register their ECS tasks as targets for an Application Load Balancer and configure listener rules to route requests to different target groups based on path or host headers. This improves the availability and performance of the web application.

Question No : 22 - (Topic 1)
A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Enable versioning on the S3 bucket.
B. Enable MFA Delete on the S3 bucket.
C. Create a bucket policy on the S3 bucket.
D. Enable default encryption on the S3 bucket.
E. Create a lifecycle policy for the objects in the S3 bucket.
Answer: A,B
Explanation: To protect data in an S3 bucket from accidental deletion, versioning should be enabled, which enables you to preserve, retrieve, and restore every version of every object in an S3 bucket. Additionally, enabling MFA (multi-factor authentication) Delete on the S3 bucket adds an extra layer of protection by requiring an authentication token in addition to the user's access keys to delete objects in the bucket.
Reference:
AWS S3 Versioning documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
AWS S3 MFA Delete documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html

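A minimal boto3 sketch of options A and B together; the bucket name and MFA device are hypothetical, and MFA Delete can only be enabled by the bucket owner's root credentials with a current token.

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and MFA device; the MFA argument is the device serial
    # (or ARN), a space, and the current 6-digit code.
    s3.put_bucket_versioning(
        Bucket="example-critical-data",
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
        MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    )
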
Question No : 23 - (Topic 1)
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?
A. Use the CreateQueue API call to create a new queue.
B. Use the AddPermission API call to add appropriate permissions.
C. Use the ReceiveMessage API call to set an appropriate wait time.
D. Use the ChangeMessageVisibility API call to increase the visibility timeout.
Answer: D
Explanation: The visibility timeout begins when Amazon SQS returns a message. During this time, the consumer processes and deletes the message. However, if the consumer fails before deleting the message and your system doesn't call the DeleteMessage action for that message before the visibility timeout expires, the message becomes visible to other consumers and the message is received again. If a message must be received only once, your consumer should delete it within the duration of the visibility timeout.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
Keyword: the application reads from the SQS queue and writes to an Amazon RDS table. Option D is the best fit, and the other options are ruled out (Option A: you can't simply introduce another queue; Option B: only adds permissions; Option C: only affects how messages are retrieved).
FIFO queues are designed to never introduce duplicate messages. However, your message producer might introduce duplicates in certain scenarios: for example, if the producer sends a message, does not receive a response, and then resends the same message. Amazon SQS APIs provide deduplication functionality that prevents your message producer from sending duplicates. Any duplicates introduced by the message producer are removed within a 5-minute deduplication interval. For standard queues, you might occasionally receive a duplicate copy of a message (at-least-once delivery). If you use a standard queue, you must design your applications to be idempotent (that is, they must not be affected adversely when processing the same message more than once).

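A minimal boto3 sketch of option D in consumer code, with a hypothetical queue URL; the consumer extends the visibility timeout so it can finish writing to RDS and delete the message before the message reappears for other consumers.

    import boto3

    sqs = boto3.client("sqs")

    # Hypothetical queue URL, for illustration only.
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/records-queue"

    response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for message in response.get("Messages", []):
        # Hide the message from other consumers for 5 minutes while it is processed.
        sqs.change_message_visibility(
            QueueUrl=queue_url,
            ReceiptHandle=message["ReceiptHandle"],
            VisibilityTimeout=300,
        )
        # ... write the record to the RDS table here ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
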
Question No : 24 - (Topic 1)
A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on an Amazon EC2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the company's internet connection, to the bastion host and to the application servers. The solutions architect must make sure that the security groups of all the EC2 instances will allow that access.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)
A. Replace the current security group of the bastion host with one that only allows inbound access from the application instances.
B. Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company.
C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company.
D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of the bastion host.
E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of the bastion host.
Answer: C,D
Explanation: https://digitalcloud.training/ssh-into-ec2-in-private-subnet/

Question No : 25 - (Topic 1)
A company recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A solutions architect needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner's AWS account. The AMI is backed by Amazon Elastic Block Store (Amazon EBS) and uses a customer managed customer master key (CMK) to encrypt EBS volume snapshots.
What is the MOST secure way for the solutions architect to share the AMI with the MSP Partner's AWS account?
A. Make the encrypted AMI and snapshots publicly available. Modify the CMK's key policy to allow the MSP Partner's AWS account to use the key.
B. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the CMK's key policy to allow the MSP Partner's AWS account to use the key.
C. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the CMK's key policy to trust a new CMK that is owned by the MSP Partner for encryption.
D. Export the AMI from the source account to an Amazon S3 bucket in the MSP Partner's AWS account. Encrypt the S3 bucket with a CMK that is owned by the MSP Partner. Copy and launch the AMI in the MSP Partner's AWS account.
Answer: B
Explanation: Share the existing KMS key with the MSP Partner's external account because it has already been used to encrypt the AMI snapshot.
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html

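The launchPermission change in option B is a one-line API call. A minimal boto3 sketch, with a hypothetical AMI ID and partner account ID (the CMK's key policy must separately grant the partner account use of the key):

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical AMI ID and MSP Partner account ID, for illustration only.
    ec2.modify_image_attribute(
        ImageId="ami-0123456789abcdef0",
        # Share with exactly one account; the AMI is never made public.
        LaunchPermission={"Add": [{"UserId": "111122223333"}]},
    )
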
Question No : 26 - (Topic 1)
A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application needs to be encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be rotated each year before the certificate expires.
What should a solutions architect do to meet these requirements?
A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
B. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Import the key material from the certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
C. Use AWS Certificate Manager (ACM) Private Certificate Authority to issue an SSL/TLS certificate from the root CA. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually.
Answer: D
Explanation: https://www.amazonaws.cn/en/certificatemanager/faqs/#Managed_renewal_and_deployment

Question No : 27 - (Topic 1)
A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.
Which solution will meet these requirements?
A. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
D. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.
Answer: C
Explanation: AWS Network Firewall supports both inspection and filtering, as required.

Question No : 28 - (Topic 1)
A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2 instances. The file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable storage solution that preserves how users currently access the files.
What should a solutions architect do to meet these requirements?
A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances.
C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.
Answer: C
Explanation: Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers, backed by a fully native Windows file system.
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonEFS.html
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html

Question No : 29 - (Topic 1)
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?
A. Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.
B. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.
C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
D. Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store the photos and metadata.
Answer: C
Explanation: This solution meets the requirements of scalability, performance, and availability. AWS Lambda can process the photos in parallel and scale up or down automatically depending on the demand. Amazon S3 can store the photos and metadata reliably and durably, and provide high availability and low latency. DynamoDB can store the metadata efficiently and provide consistent performance. This solution also reduces the cost and complexity of managing EC2 instances and EBS volumes.
Option A is incorrect because storing the photos in DynamoDB is not a good practice, as it can increase the storage cost and limit the throughput. Option B is incorrect because Kinesis Data Firehose is not designed for processing photos, but for streaming data to destinations such as S3 or Redshift. Option D is incorrect because increasing the number of EC2 instances and using Provisioned IOPS SSD volumes does not guarantee scalability, as it depends on the load balancer and the application code. It also increases the cost and complexity of managing the infrastructure.
References:
https://www.quora.com/How-can-I-use-DynamoDB-for-storing-metadata-for-Amazon-S3-objects
https://aws.amazon.com/certification/certified-solutions-architect-professional/
https://www.examtopics.com/discussions/amazon/view/7193-exam-aws-certifiedsolutions-architect-professional-topic-1/
https://aws.amazon.com/architecture/

Question No : 30 - (Topic 1)
A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from around the world. The company must decrease latency for users who access the website.
Which solution meets these requirements MOST cost-effectively?
A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries.
B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point to the IP addresses of the accelerators.
C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
D. Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint.
Answer: C
Explanation: Amazon CloudFront is a content delivery network (CDN) that caches content at edge locations around the world, providing low latency and high transfer speeds to users accessing the content. Adding a CloudFront distribution in front of the S3 bucket will cache the static website's content at edge locations around the world, decreasing latency for users accessing the website. This solution is also cost-effective, as it only charges for the data transfer and requests made by users accessing the content from the CloudFront edge locations. Additionally, this solution provides scalability and reliability benefits, as CloudFront can automatically scale to handle increased demand and provide high availability for the website.

Question No : 31 - (Topic 1)
An Amazon EC2 administrator created the following policy associated with an IAM group containing several users. [The policy document appears as an image in the original and is not reproduced in this transcript.]
What is the effect of this policy?
A. Users can terminate an EC2 instance in any AWS Region except us-east-1.
B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.
C. Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
Answer: C
Explanation: The policy prevents anyone from doing any EC2 action in any Region except us-east-1, and allows only users with a source IP in 10.100.100.0/24 to terminate instances. So a user with source IP 10.100.100.254 can terminate instances in the us-east-1 Region.

Question No : 32 - (Topic 1)
A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the production environment. The software that accesses this data requires consistently high I/O performance. A solutions architect needs to minimize the time that is required to clone the production data into the test environment.
Which solution will meet these requirements?
A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment.
B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the production EBS volumes to the EC2 instances in the test environment.
C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production EBS snapshots.
D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.
Answer: C
Explanation: To clone the production data into the test environment with high I/O performance and without affecting the production environment, the best option is to take EBS snapshots of the production EBS volumes and restore them onto new EBS volumes in the test environment. Then, attach the new EBS volumes to EC2 instances in the test environment. This option minimizes the time required to clone the data and ensures that modifications to the cloned data do not affect the production environment.
Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html

Question No : 33 - (Topic 1)
A company runs its infrastructure on AWS and has a registered base of 700,000 users for its document management application. The company intends to create a product that converts large .pdf files to .jpg image files. The .pdf files average 5 MB in size. The company needs to store the original files and the converted files. A solutions architect must design a scalable solution to accommodate demand that will grow rapidly over time.
Which solution meets these requirements MOST cost-effectively?
A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3.
B. Save the .pdf files to Amazon DynamoDB. Use the DynamoDB Streams feature to invoke an AWS Lambda function to convert the files to .jpg format and store them back in DynamoDB.
C. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic Block Store (Amazon EBS) storage, and an Auto Scaling group. Use a program on the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg files in the EBS store.
D. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) storage, and an Auto Scaling group. Use a program on the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg files in the EFS store.
Answer: A
Explanation: Elastic Beanstalk is expensive, and DynamoDB has a 400 KB item size limit, so it cannot hold these files. Lambda and S3 are the right fit.

Question No : 34 - (Topic 1)
A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. According to the company's security regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?
A. Configure an S3 interface endpoint.
B. Configure an S3 gateway endpoint.
C. Create an S3 bucket in a private subnet.
D. Create an S3 bucket in the same Region as the EC2 instance.
Answer: B
Explanation: https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-for-s3
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html

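A minimal boto3 sketch of option B, with hypothetical VPC and route table IDs; a gateway endpoint adds an S3 prefix-list route to the chosen route tables, so traffic to S3 never leaves the AWS network.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Hypothetical VPC and route table IDs, for illustration only.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],  # route tables of the app subnets
    )
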
Question No : 35 - (Topic 1)
A company has an application that ingests incoming messages. These messages are then quickly consumed by dozens of other applications and microservices. The number of messages varies drastically and sometimes spikes as high as 100,000 each second. The company wants to decouple the solution and increase scalability.
Which solution meets these requirements?
A. Persist the messages to Amazon Kinesis Data Analytics. All the applications will read and process the messages.
B. Deploy the application on Amazon EC2 instances in an Auto Scaling group, which scales the number of EC2 instances based on CPU metrics.
C. Write the messages to Amazon Kinesis Data Streams with a single shard. All applications will read from the stream and process the messages.
D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with one or more Amazon Simple Queue Service (Amazon SQS) subscriptions. All applications then process the messages from the queues.
Answer: D
Explanation: https://aws.amazon.com/sqs/features/
By publishing incoming messages to an SNS topic with SQS queue subscriptions, the company decouples message ingestion from message processing. Each consuming application reads from its own queue at its own pace, and the queues buffer traffic spikes, so consumers can scale based on queue depth instead of being overwhelmed by a burst of 100,000 messages each second.

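A minimal boto3 sketch of the fan-out in option D, with hypothetical topic and queue ARNs; each consuming application gets its own SQS subscription to the same SNS topic.

    import boto3

    sns = boto3.client("sns")

    # Hypothetical topic and queue ARNs, for illustration only.
    topic_arn = "arn:aws:sns:us-east-1:123456789012:incoming-messages"
    queue_arns = [
        "arn:aws:sqs:us-east-1:123456789012:app-1-queue",
        "arn:aws:sqs:us-east-1:123456789012:app-2-queue",
    ]

    for queue_arn in queue_arns:
        sns.subscribe(
            TopicArn=topic_arn,
            Protocol="sqs",
            Endpoint=queue_arn,
            Attributes={"RawMessageDelivery": "true"},  # deliver the payload unchanged
        )

Each queue buffers spikes independently, so one slow consumer never blocks the others.
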
Question No : 36 - (Topic 1)
A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms data as the data is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3.
C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.
Answer: C

Question No : 37 - (Topic 1)
A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized workload.
What should a solutions architect do to meet these requirements?
A. Use Amazon EC2 instances, and install Docker on the instances.
B. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes.
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.
D. Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).
Answer: C
Explanation: Use Amazon ECS on AWS Fargate, since the requirements are scalability and availability without having to provision and manage the underlying infrastructure that runs the containerized workload.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html

Question No : 38 - (Topic 1)
A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?
A. Use AWS Config rules to define and detect resources that are not properly tagged.
B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWa
