Exam SAA-C03
AWS Certified Solutions Architect - Associate (SAA-C03)
Version: 19.0 [Total Questions: 677]

Topic 1, Exam Pool A

1. - (Topic 1)
A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is growing quickly. The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS with minimum code changes and minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming requests.
B. Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming requests.
C. Use AWS Lambda with new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use Amazon API Gateway as an entry point to the Lambda functions.
D. Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming requests at the appropriate scale.
Answer: A
Explanation: AWS Fargate is a serverless compute engine that lets users run containers without having to manage servers or clusters of Amazon EC2 instances. Users can use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Amazon ECS is a fully managed container orchestration service for Docker containers. Service Auto Scaling adjusts the desired number of tasks in an ECS service based on CloudWatch metrics, such as CPU utilization or request count. Users can use AWS Fargate on Amazon ECS to migrate the application to AWS with minimum code changes and minimum development effort, as they only need to package their application in containers and specify the CPU and memory requirements. An Application Load Balancer operates at the application layer and routes traffic to targets based on the content of the request. Users can register their ECS tasks as targets for an Application Load Balancer and configure listener rules to route requests to different target groups based on path or host headers, improving the availability and performance of the web application.
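As a rough illustration of how Service Auto Scaling is wired up for a Fargate service, the sketch below registers the service's desired task count as a scalable target and attaches a CPU-based target-tracking policy. The cluster and service names are hypothetical.

```python
import boto3

# Service Auto Scaling for an ECS service is configured through the
# Application Auto Scaling API. Cluster and service names are hypothetical.
autoscaling = boto3.client("application-autoscaling")

resource_id = "service/web-cluster/web-service"

# Register the service's DesiredCount as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Track average CPU utilization at 60%; ECS adds or removes Fargate tasks.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```

A request-count policy (ALBRequestCountPerTarget) could be used instead when the load balancer's request rate is the better scaling signal.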
2. - (Topic 1)
A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule.
B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the required Regions. Configure Systems Manager to rotate the secrets on a schedule.
C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets.
Answer: A
Explanation: https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/
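A minimal sketch of the setup in answer A, assuming a secret named prod/mysql-credentials, a hypothetical rotation Lambda function, and us-west-2 as the replica Region:

```python
import boto3

secrets = boto3.client("secretsmanager")

# Replicate the secret to the other Regions where the databases run.
secrets.replicate_secret_to_regions(
    SecretId="prod/mysql-credentials",
    AddReplicaRegions=[{"Region": "us-west-2"}],
)

# Rotate on a schedule using a rotation Lambda function (ARN hypothetical).
secrets.rotate_secret(
    SecretId="prod/mysql-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-mysql",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```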
3. - (Topic 1)
A company is running a popular social media website. The website gives users the ability to upload images to share with other users. The company wants to make sure that the images do not contain inappropriate content. The company needs a solution that minimizes development effort.
What should a solutions architect do to meet these requirements?
A. Use Amazon Comprehend to detect inappropriate content. Use human review for low-confidence predictions.
B. Use Amazon Rekognition to detect inappropriate content. Use human review for low-confidence predictions.
C. Use Amazon SageMaker to detect inappropriate content. Use Ground Truth to label low-confidence predictions.
D. Use AWS Fargate to deploy a custom machine learning model to detect inappropriate content. Use Ground Truth to label low-confidence predictions.
Answer: B
Explanation:
https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html?pg=ln&sec=ft
https://docs.aws.amazon.com/rekognition/latest/dg/a2i-rekognition.html

4. - (Topic 1)
A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?
A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
Answer: C
Explanation: Kinesis Data Firehose can send data records to various destinations, including Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, and any HTTP endpoint that is owned by you or any of your third-party service providers. The supported destinations are:
* Amazon OpenSearch Service
* Amazon S3
* Datadog
* Dynatrace
* Honeycomb
* HTTP Endpoint
* LogicMonitor
* MongoDB Cloud
* New Relic
* Splunk
* Sumo Logic
https://docs.aws.amazon.com/firehose/latest/dev/create-name.html
https://aws.amazon.com/kinesis/data-streams/
Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.
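To make the Lambda integration in answer C concrete, here is a minimal consumer sketch. The DynamoDB table name and the sensitive field names are assumptions, and the function is presumed to be attached to the Kinesis data stream through an event source mapping.

```python
import base64
import json
import boto3

# Hypothetical table name for the cleansed transactions.
table = boto3.resource("dynamodb").Table("transactions")

SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}  # assumed field names

def handler(event, context):
    # Kinesis delivers records base64-encoded in the event payload.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Remove sensitive attributes before persisting to DynamoDB.
        clean = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}
        table.put_item(Item=clean)
```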
5. - (Topic 1)
A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule's target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.
D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
Answer: B
Explanation: Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between software-as-a-service (SaaS) applications like Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift, in just a few clicks. https://aws.amazon.com/appflow/

6. - (Topic 1)
A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required, as the files contain critical business data that is not easy to reproduce. The files are frequently accessed in the first 30 days after object creation but are rarely accessed after the first 30 days.
Which storage solution is MOST cost-effective?
A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after object creation.
B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after object creation.
C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation.
D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Move the files to S3 Glacier 4 years after object creation.
Answer: B
Explanation: https://aws.amazon.com/s3/storage-classes/
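The lifecycle policy in answer B could look roughly like this sketch (bucket name hypothetical); 4 years is expressed as 1,460 days:

```python
import boto3

s3 = boto3.client("s3")

# Objects move to S3 One Zone-IA after 30 days and are deleted
# 4 years (1,460 days) after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
                "Expiration": {"Days": 1460},
            }
        ]
    },
)
```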
7. - (Topic 1)
A company hosts its web applications in the AWS Cloud. The company configures Elastic Load Balancers to use certificates that are imported into AWS Certificate Manager (ACM). The company's security team must be notified 30 days before the expiration of each certificate.
What should a solutions architect recommend to meet the requirement?
A. Add a rule in ACM to publish a custom message to an Amazon Simple Notification Service (Amazon SNS) topic every day, beginning 30 days before any certificate will expire.
B. Create an AWS Config rule that checks for certificates that will expire within 30 days. Configure Amazon EventBridge (Amazon CloudWatch Events) to invoke a custom alert by way of Amazon Simple Notification Service (Amazon SNS) when AWS Config reports a noncompliant resource.
C. Use AWS Trusted Advisor to check for certificates that will expire within 30 days. Create an Amazon CloudWatch alarm that is based on Trusted Advisor metrics for check status changes. Configure the alarm to send a custom alert by way of Amazon Simple Notification Service (Amazon SNS).
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to detect any certificates that will expire within 30 days. Configure the rule to invoke an AWS Lambda function. Configure the Lambda function to send a custom alert by way of Amazon Simple Notification Service (Amazon SNS).
Answer: B
Explanation: https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
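A sketch of answer B's building blocks: the AWS managed Config rule that flags expiring ACM certificates, plus an EventBridge rule that forwards noncompliance events to an SNS topic (the topic ARN is hypothetical):

```python
import json
import boto3

config = boto3.client("config")
events = boto3.client("events")

# Managed rule that marks ACM certificates expiring within 30 days noncompliant.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "acm-certificate-expiration-check",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ACM_CERTIFICATE_EXPIRATION_CHECK",
        },
        "InputParameters": json.dumps({"daysToExpiration": "30"}),
    }
)

# Route compliance-change events for the rule to an SNS topic.
events.put_rule(
    Name="acm-expiry-alerts",
    EventPattern=json.dumps({
        "source": ["aws.config"],
        "detail-type": ["Config Rules Compliance Change"],
        "detail": {
            "configRuleName": ["acm-certificate-expiration-check"],
            "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
        },
    }),
)
events.put_targets(
    Rule="acm-expiry-alerts",
    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:123456789012:security-team"}],
)
```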
8. - (Topic 1)
A company observes an increase in Amazon EC2 costs in its most recent bill. The billing team notices unwanted vertical scaling of instance types for a couple of EC2 instances. A solutions architect needs to create a graph comparing the last 2 months of EC2 costs and perform an in-depth analysis to identify the root cause of the vertical scaling.
How should the solutions architect generate the information with the LEAST operational overhead?
A. Use AWS Budgets to create a budget report and compare EC2 costs based on instance types.
B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.
C. Use graphs from the AWS Billing and Cost Management dashboard to compare EC2 costs based on instance types for the last 2 months.
D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a source to generate an interactive graph based on instance types.
Answer: B
Explanation: AWS Cost Explorer is a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. You can view data for up to the last 12 months, forecast how much you're likely to spend for the next 12 months, and get recommendations for what Reserved Instances to purchase. You can use Cost Explorer to identify areas that need further inquiry and see trends that you can use to understand your costs. https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html

9. - (Topic 1)
A company has an application that ingests incoming messages. These messages are then quickly consumed by dozens of other applications and microservices. The number of messages varies drastically and sometimes spikes as high as 100,000 each second. The company wants to decouple the solution and increase scalability.
Which solution meets these requirements?
A. Persist the messages to Amazon Kinesis Data Analytics. All the applications will read and process the messages.
B. Deploy the application on Amazon EC2 instances in an Auto Scaling group, which scales the number of EC2 instances based on CPU metrics.
C. Write the messages to Amazon Kinesis Data Streams with a single shard. All applications will read from the stream and process the messages.
D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with one or more Amazon Simple Queue Service (Amazon SQS) subscriptions. All applications then process the messages from the queues.
Answer: D
Explanation: https://aws.amazon.com/sqs/features/ Publishing the messages to an SNS topic with SQS queue subscriptions (the fanout pattern) decouples message ingestion from consumption. Each consuming application reads from its own queue at its own pace, and both SNS and SQS scale automatically to absorb spikes such as 100,000 messages each second.
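The fanout pattern in answer D can be set up roughly as follows (topic and queue names hypothetical; the queue access policy that allows SNS to deliver messages is omitted for brevity):

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# One shared topic; each consuming application gets its own queue (fanout).
topic_arn = sns.create_topic(Name="incoming-messages")["TopicArn"]

for app in ["billing", "analytics", "fulfillment"]:
    queue_url = sqs.create_queue(QueueName=f"{app}-messages")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    # Subscribe the queue to the topic so every message is copied to it.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```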
10. - (Topic 1)
A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in size. Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation.
What should a solutions architect do to meet these requirements with the LEAST development effort?
A. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the bucket. If objects contain PII, trigger an S3 Lifecycle policy to remove the objects that contain PII.
B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
C. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
D. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Email Service (Amazon SES) to trigger a notification to the administrators and trigger an S3 Lifecycle policy to remove the objects that contain PII.
Answer: B
Explanation: To detect and alert administrators when PII is shared, and to automate remediation with the least development effort, the best approach is to use an Amazon S3 bucket as a secure transfer point and scan the objects in the bucket with Amazon Macie. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect sensitive data stored in Amazon S3. It can be used to classify sensitive data, monitor access to sensitive data, and automate remediation actions. In this scenario, after the files are uploaded to the S3 bucket, Amazon Macie can scan the objects for PII; if it detects any, it can trigger an Amazon Simple Notification Service (Amazon SNS) notification to alert the administrators to remove the objects containing PII. This approach requires the least development effort, because Amazon Macie has pre-built data classification rules that detect PII in various formats. Hence, option B is the correct answer.
References:
- Amazon Macie User Guide: https://docs.aws.amazon.com/macie/latest/userguide/what-is-macie.html
- AWS Well-Architected Framework - Security Pillar: https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html

11. - (Topic 1)
A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day.
What should a solutions architect do to transmit and process the clickstream data?
A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics.
B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis.
C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis.
D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.
Answer: D
Explanation: https://aws.amazon.com/es/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/

12. - (Topic 1)
A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?
A. Launch the NAT gateway in each Availability Zone.
B. Replace the NAT gateway with a NAT instance.
C. Deploy a gateway VPC endpoint for Amazon S3.
D. Provision an EC2 Dedicated Host to run the EC2 instances.
Answer: A
Explanation: In this scenario, the company wants to avoid Regional data transfer charges while downloading and uploading images from Amazon S3. Launching a NAT gateway in each Availability Zone that the EC2 instances run in allows the instances to route traffic through a local NAT gateway instead of sending traffic across an Availability Zone boundary. Traffic that stays within a single Availability Zone avoids cross-AZ data transfer charges.
Reference: AWS NAT gateway documentation: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
13. - (Topic 1)
A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown, and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for timely backups to Amazon S3 with minimal impact on internet connectivity for internal users.
Which solution meets these requirements?
A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.
Answer: B
Explanation: To address the bandwidth limitations of the company's on-premises application while minimizing the impact on internal user connectivity, a new AWS Direct Connect connection should be established to carry the backup traffic. This solution offers a secure, high-speed connection between the company's data center and AWS, allowing the company to transfer data quickly without consuming internet bandwidth.
Reference: AWS Direct Connect documentation: https://aws.amazon.com/directconnect/

14. - (Topic 1)
A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that the catalog is stored in a durable location.
What should a solutions architect do to meet these requirements?
A. Move the catalog to Amazon ElastiCache for Redis.
B. Deploy a larger EC2 instance with a larger instance store.
C. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.
D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.
Answer: D
Explanation: Moving the catalog to an Amazon Elastic File System (Amazon EFS) file system provides both high availability and durability. Amazon EFS is a fully managed, highly available, and durable file system that is built to scale on demand. With Amazon EFS, the catalog data can be stored and accessed from multiple EC2 instances in different Availability Zones, ensuring high availability. Amazon EFS also automatically stores files redundantly within and across multiple Availability Zones, making it a durable storage option.

15. - (Topic 1)
A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?
A. Stop the DB instance when tests are completed. Restart the DB instance when required.
B. Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed.
C. Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.
D. Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance again when required.
Answer: A
Explanation: To reduce the cost of running the tests without reducing the compute and memory attributes of the Amazon RDS for MySQL DB instance, the development team can stop the instance when tests are completed and restart it when required. Stopping the DB instance when not in use saves costs because customers are charged only for storage while the DB instance is stopped. During this time, automated backups and automated DB instance maintenance are suspended. When the instance is restarted, it retains the same configuration, security groups, and DB parameter groups as when it was stopped.
Reference: Amazon RDS documentation: Stopping and starting a DB instance (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html)
16. - (Topic 1)
A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets. A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the web server.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
B. Create an Application Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.
Answer: D
Explanation: https://aws.amazon.com/blogs/networking-and-content-delivery/scaling-network-traffic-inspection-using-aws-gateway-load-balancer/
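Assuming a Gateway Load Balancer and its endpoint service already exist in the inspection VPC, the endpoint and routing from answer D could be sketched like this (all IDs and the endpoint-service name are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Create the Gateway Load Balancer endpoint in the application VPC.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0abc1234"],
)
endpoint_id = response["VpcEndpoint"]["VpcEndpointId"]

# Send traffic destined for the web tier through the endpoint for inspection
# (route table ID hypothetical; typically an edge route table for the IGW).
ec2.create_route(
    RouteTableId="rtb-0abc1234",
    DestinationCidrBlock="10.0.1.0/24",
    VpcEndpointId=endpoint_id,
)
```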
17. - (Topic 1)
A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?
A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
Answer: C
Explanation: Static content can be cached at CloudFront edge locations from S3, and the dynamic content served by the EC2 instances behind the ALB can be accelerated by Global Accelerator, which has the ALB and the CloudFront distribution as endpoints. For the custom domain name, Route 53 alias records point the domain at the accelerator's DNS name. https://aws.amazon.com/blogs/networking-and-content-delivery/improving-availability-and-performance-for-application-load-balancers-using-one-click-integration-with-aws-global-accelerator/

18. - (Topic 1)
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?
A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
Answer: B
Explanation: To maximize resiliency and scalability, the best solution is to use an Amazon SQS queue as a destination for the jobs. This decouples the primary server from the compute nodes, allowing them to scale independently, and helps prevent job loss in the event of a failure. Using an Auto Scaling group of Amazon EC2 instances for the compute nodes allows for automatic scaling based on the workload. In this case, it is recommended to configure the Auto Scaling group based on the size of the Amazon SQS queue, which is a better indicator of the actual workload than the load on the primary server or compute nodes. This approach ensures that the application can handle variable workloads while minimizing costs by automatically scaling the compute nodes up or down as needed.
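A rough sketch of queue-based scaling for answer B, pairing a simple scaling policy with a CloudWatch alarm on queue depth (the queue name, group name, and threshold are assumptions):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Scale out by two instances whenever the alarm fires.
policy_arn = autoscaling.put_scaling_policy(
    AutoScalingGroupName="compute-nodes",
    PolicyName="scale-out-on-backlog",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)["PolicyARN"]

# Alarm on the number of visible messages in the job queue.
cloudwatch.put_metric_alarm(
    AlarmName="job-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "jobs"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy_arn],
)
```

A target-tracking policy on a backlog-per-instance metric is another common variant of the same idea.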
19. - (Topic 1)
A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call, but users access the files infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1 year old as quickly as possible. A delay in retrieving older files is acceptable.
Which solution will meet these requirements MOST cost-effectively?
A. Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files from S3 Glacier Instant Retrieval.
B. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year. Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3 Glacier Select.
C. Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata from Amazon S3.
D. Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Deep Archive after 1 year. Store search metadata in Amazon RDS. Query the files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive.
Answer: B
Explanation: "For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class, an archive storage class that delivers the lowest cost storage with milliseconds retrieval. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval (formerly S3 Glacier), with retrieval in minutes or free bulk retrievals in 5-12 hours." https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-s3-glacier-instant-retrieval-storage-class/

20. - (Topic 1)
A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket.
What should the solutions architect do to meet this requirement?
A. Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.
B. Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.
C. Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.
D. Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.
Answer: A
Explanation: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/
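A minimal sketch of answer A (role, bucket, and policy names hypothetical). EC2 attaches the role through an instance profile, so no long-lived credentials are stored on the instances:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the EC2 service assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-s3-access",
                AssumeRolePolicyDocument=json.dumps(trust))

# Grant read/write access to the document bucket only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-doc-bucket/*",
    }],
}
iam.put_role_policy(RoleName="app-s3-access", PolicyName="doc-bucket-rw",
                    PolicyDocument=json.dumps(policy))

# EC2 instances pick up the role through an instance profile.
iam.create_instance_profile(InstanceProfileName="app-s3-access")
iam.add_role_to_instance_profile(InstanceProfileName="app-s3-access",
                                 RoleName="app-s3-access")
```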
21. - (Topic 1)
A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?
A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
Answer: B
Explanation: The basic difference between Snowball and Snowball Edge is the capacity they provide. Snowball provides a total of 50 TB or 80 TB, out of which 42 TB or 72 TB is available, while AWS Snowball Edge provides 100 TB, out of which 83 TB is available.

22. - (Topic 1)
A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.
What should a solutions architect do to meet these requirements?
A. Create an AWS Lambda function to apply the patch to all EC2 instances.
B. Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
C. Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
D. Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.
Answer: B
Explanation: https://docs.aws.amazon.com/systems-manager/latest/userguide/about-windows-app-patching.html
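Patch Manager runs are driven by the AWS-RunPatchBaseline Systems Manager document. A sketch of kicking off an install across tagged instances (the tag key/value and the concurrency and error thresholds are assumptions; patching third-party software additionally requires a custom patch baseline with the vendor's patch source configured):

```python
import boto3

ssm = boto3.client("ssm")

# Run the Patch Manager document against all instances tagged for patching.
ssm.send_command(
    Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
    MaxConcurrency="10%",  # patch in waves across the 1,000 instances
    MaxErrors="5%",
)
```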
23. - (Topic 1)
A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration changes on its AWS resources and record a history of API calls made to these resources.
What should a solutions architect do to meet these requirements?
A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls.
B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls.
D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls.
Answer: B
Explanation: AWS Config is a fully managed service that allows the company to assess, audit, and evaluate the configurations of its AWS resources. It provides a detailed inventory of the resources in use and tracks changes to resource configurations. AWS Config can detect configuration changes and alert the company when changes occur, and it provides a historical view of changes, which is essential for compliance and governance purposes. AWS CloudTrail is a fully managed service that provides a detailed history of API calls made to the company's AWS resources. It records all API activity in the AWS account, including who made the API call, when the call was made, and what resources were affected by the call. This information is critical for security and auditing purposes, as it allows the company to investigate any suspicious activity on its AWS resources.

24. - (Topic 1)
A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Configure the application to send the data to Amazon Kinesis Data Firehose.
B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application's API for the data.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application's API for the data.
E. Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to send the report by email.
Answer: B,D
Explanation: https://docs.aws.amazon.com/ses/latest/dg/send-email-formatted.html
D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application's API for the data. The Lambda function extracts the shipping statistics and organizes the data into an HTML format.
B. Use Amazon Simple Email Service (Amazon SES) to format the data and send the report by email to multiple addresses at the same time every morning.
Therefore, options D and B are the correct choices. Option A is incorrect because Kinesis Data Firehose is not necessary for this use case. Option C is incorrect because AWS Glue is not required to query the application's API. Option E is incorrect because S3 event notifications cannot be used to send the report by email.
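A sketch of the D+B combination: an EventBridge rule with a schedule expression such as cron(0 7 * * ? *) invokes this Lambda function each morning, and the function renders HTML and sends it with Amazon SES. The endpoint URL and email addresses are hypothetical, and the SES identities must be verified beforehand.

```python
import urllib.request
import boto3

ses = boto3.client("ses")

def handler(event, context):
    # Query the application's REST API for the shipping statistics.
    stats = urllib.request.urlopen(
        "https://internal.example.com/shipping-stats").read().decode()

    # Organize the data into a simple HTML report.
    html = ("<html><body><h1>Daily shipping report</h1>"
            f"<pre>{stats}</pre></body></html>")

    # Send the report to several addresses at once.
    ses.send_email(
        Source="reports@example.com",
        Destination={"ToAddresses": ["ops@example.com", "mgmt@example.com"]},
        Message={
            "Subject": {"Data": "Daily shipping statistics"},
            "Body": {"Html": {"Data": html}},
        },
    )
```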
25. - (Topic 1)
A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images.
Which method is the MOST cost-effective for hosting the website?
A. Containerize the website and host it in AWS Fargate.
B. Create an Amazon S3 bucket and host the website there.
C. Deploy a web server on an Amazon EC2 instance to host the website.
D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.
Answer: B
Explanation: In static websites, web pages are returned by the server prebuilt. They use simple languages such as HTML, CSS, or JavaScript, and there is no server-side processing of content. Because the pages are returned with no change, static websites are fast, require no interaction with databases, and are less costly because the host does not need to support server-side processing. In dynamic websites, web pages are built during runtime according to the user's demand, using server-side scripting languages such as PHP, Node.js, or ASP.NET. They are slower than static websites, but updates and interaction with databases are possible.

26. - (Topic 1)
A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase the application's elasticity and availability. The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours, the company's development team pulls a full export of the production database to populate a database in the staging environment. During this period, users experience unacceptable application latency. The development team is unable to use the staging environment until the procedure completes. A solutions architect must recommend a replacement architecture that alleviates the application latency issue. The replacement architecture also must give the development team the ability to continue using the staging environment without delay.
Which solution meets these requirements?
A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
B. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on demand.
C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging database.
D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
Answer: B
Explanation: https://aws.amazon.com/blogs/aws/amazon-aurora-fast-database-cloning/
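Aurora fast cloning is exposed through the point-in-time-restore API with a copy-on-write restore type. A sketch (cluster identifiers and instance class hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Create a copy-on-write clone of the production cluster.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="prod-aurora-mysql",
    DBClusterIdentifier="staging-clone",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# The clone needs at least one instance before it can accept connections.
rds.create_db_instance(
    DBInstanceIdentifier="staging-clone-1",
    DBClusterIdentifier="staging-clone",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)
```

Because the clone shares storage pages with the source until either side writes, creating it takes minutes regardless of database size and puts no export load on production.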
27. - (Topic 1)
A company uses 50 TB of data for reporting. The company wants to move this data from on premises to AWS. A custom application in the company's data center runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and needs to begin the transfer process as soon as possible. The data center does not have any available network bandwidth for additional workloads. A solutions architect must transfer the data and must configure the transformation job to continue to run in the AWS Cloud.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.
B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS Glue.
D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2 instance on AWS to run the transformation application.
Answer: D
Explanation: AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can do local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud. Users can order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute to move 50 TB of data from on premises to AWS. The Storage Optimized device has 80 TB of usable storage and 40 vCPUs of compute power. Users can copy the data to the device using the AWS OpsHub graphical user interface or the Snowball client command line tool, and can create and run Amazon EC2 instances on the device using Amazon Machine Images (AMIs) that are compatible with the sbe1 instance type. This way, the data is transferred without using any network bandwidth, and after the transfer is complete a new EC2 instance can be launched in the same AWS Region to continue running the transformation job in the AWS Cloud.

28. - (Topic 1)
A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that run in an Auto Scaling group. The company has deployments across multiple AWS Regions. The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions.
Which solution will meet these requirements?
A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Use the NLB as an AWS Global Accelerator endpoint in each Region.
B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Use the ALB as an AWS Global Accelerator endpoint in each Region.
C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 latency record that points to aliases for each NLB. Create an Amazon CloudFront distribution that uses the latency record as an origin.
D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 weighted record that points to aliases for each ALB. Deploy an Amazon CloudFront distribution that uses the weighted record as an origin.
Answer: D
Explanation: https://aws.amazon.com/global-accelerator/faqs/ HTTP/HTTPS - ALB; TCP and UDP - NLB; lowest-latency routing with anycast IP addressing and automated failover - Global Accelerator; caching at edge locations - CloudFront. AWS Global Accelerator automatically checks the health of your applications and routes user traffic only to healthy application endpoints. If the health status changes or you make configuration updates, AWS Global Accelerator reacts instantaneously to route your users to the next available endpoint.

29. - (Topic 1)
A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks.
Which solution meets these requirements?
A. Enable Amazon GuardDuty on the account.
B. Enable Amazon Inspector on the EC2 instances.
C. Enable AWS Shield and assign Amazon Route 53 to it.
D. Enable AWS Shield Advanced and assign the ELB to it.
Answer: D
Explanation: https://aws.amazon.com/shield/faqs/
30. - (Topic 1)
A company's HTTP application is behind a Network Load Balancer (NLB). The NLB's target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service. The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application's availability without writing custom scripts or code.
What should a solutions architect do to meet these requirements?
A. Enable HTTP health checks on the NLB, supplying the URL of the company's application.
B. Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors are detected, the application will restart.
C. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's application. Configure an Auto Scaling action to replace unhealthy instances.
D. Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to replace unhealthy instances when the alarm is in the ALARM state.
Answer: C
Explanation: An NLB cannot assure the availability of the application, because it bases its decisions solely on network- and TCP-layer variables and has no awareness of the application at all. Generally, an NLB determines availability based on the ability of a server to respond to ICMP ping or to correctly complete the three-way TCP handshake. An ALB goes much deeper and is capable of determining availability based on not only a successful HTTP GET of a particular page but also the verification that the content is as expected based on the input parameters.

31. - (Topic 1)
A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files.
Which storage option meets these requirements?
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: B
Explanation: S3 Intelligent-Tiering is the perfect use case when you don't know the frequency of access or have irregular patterns of usage. Amazon S3 offers a range of storage classes designed for different use cases: S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. If you have data residency requirements that can't be met by an existing AWS Region, you can use the S3 Outposts storage class to store your S3 data on premises. Amazon S3 also offers capabilities to manage your data throughout its lifecycle. Once an S3 Lifecycle policy is set, your data will automatically transfer to a different storage class without any changes to your application. https://aws.amazon.com/getting-started/hands-on/getting-started-using-amazon-s3-intelligent-tiering/

32. - (Topic 1)
A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?
A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.
Answer: C
Explanation: Amazon Athena can be used to query JSON data in S3 directly, with no infrastructure to manage.
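A sketch of querying the JSON logs with Athena (the database, table, and results bucket are hypothetical; the table would be defined over the S3 logs with a JSON SerDe):

```python
import time
import boto3

athena = boto3.client("athena")

# Submit the query; Athena reads the JSON objects directly from S3.
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM app_logs GROUP BY status",
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
qid = query["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state not in ("QUEUED", "RUNNING"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```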
33. - (Topic 1)
A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized workload.
What should a solutions architect do to meet these requirements?
A. Use Amazon EC2 instances, and install Docker on the instances.
B. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes.
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.
D. Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).
Answer: C
Explanation: Using Amazon ECS on AWS Fargate meets the requirements for scalability and availability without having to provision and manage the underlying infrastructure that runs the containerized workload. https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html

34. - (Topic 1)
A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from around the world. The company must decrease latency for users who access the website.
Which solution meets these requirements MOST cost-effectively?
A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries.
B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point to the IP addresses of the accelerators.
C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
D. Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint.
Answer: C
Explanation: Amazon CloudFront is a content delivery network (CDN) that caches content at edge locations around the world, providing low latency and high transfer speeds to users accessing the content. Adding a CloudFront distribution in front of the S3 bucket caches the static website's content at edge locations around the world, decreasing latency for users. This solution is also cost-effective, because it charges only for the data transfer and requests made by users accessing the content from the CloudFront edge locations. Additionally, CloudFront can automatically scale to handle increased demand and provides high availability for the website.
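A rough sketch of answer C's distribution (the bucket and caller reference are hypothetical; the CachePolicyId shown is the AWS managed CachingOptimized policy). A Route 53 alias record would then point the site's domain at the distribution's domain name.

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "static-site-2023-01",  # any unique string
        "Comment": "Static website distribution",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-site",
                "DomainName": "example-site.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-site",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingOptimized" cache policy.
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
```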
34. - (Topic 1)
A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from around the world. The company must decrease latency for users who access the website.
Which solution meets these requirements MOST cost-effectively?
A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries.
B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point to the IP addresses of the accelerators.
C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
D. Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint.
Answer: C
Explanation:
Amazon CloudFront is a content delivery network (CDN) that caches content at edge locations around the world, providing low latency and high transfer speeds to users accessing the content. Adding a CloudFront distribution in front of the S3 bucket caches the static website's content at edge locations worldwide, decreasing latency for users. The solution is also cost-effective because the company pays only for the data transfer and requests made by users accessing content from the CloudFront edge locations. In addition, CloudFront automatically scales to handle increased demand and provides high availability for the website.

35. - (Topic 1)
A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events.
A solutions architect needs to design a solution that stores customer data that is created during database upgrades.
Which solution will meet these requirements?
A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS proxy.
B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database.
C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to the database.
D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database.
Answer: D
Explanation:
https://www.learnaws.org/2020/12/13/aws-rds-proxy-deep-dive/
An SQS FIFO queue durably buffers the customer data while the database is unavailable, and a polling Lambda function writes the data once the upgrade completes, so no events are lost. RDS Proxy (option A) pools and reuses connections between Lambda and the database, which makes the application more resilient to brief failovers, but it does not store requests for the full duration of an upgrade.

36. - (Topic 1)
A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?
A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
C. Use an API Gateway authorizer to block any requests while the application processes an order.
D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.
Answer: B
Explanation:
To ensure that orders are processed in the order that they are received, the best solution is to use an Amazon SQS FIFO (First-In-First-Out) queue. This type of queue maintains the exact order in which messages are sent and received. The application can send information about new orders to the API Gateway REST API, which uses an API Gateway integration to send a message to the SQS FIFO queue. The queue can then be configured to invoke an AWS Lambda function to process each order, ensuring that orders are handled in the exact order in which they are received.
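A minimal boto3 sketch of the queue side follows. The queue name and order payload are illustrative; in the actual design the messages would arrive through API Gateway's SQS service integration rather than application code, and a single MessageGroupId keeps all orders in one strictly ordered group:

import json
import uuid

import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo". Content-based deduplication hashes
# the body so retried sends of the same order are not enqueued twice.
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"orderId": str(uuid.uuid4()), "item": "widget"}),
    MessageGroupId="orders",  # one group = strict global ordering
)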
37. - (Topic 1)
A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company's website. The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem.
Which solution addresses this performance issue?
A. Change the storage type to Provisioned IOPS SSD
B. Change the DB instance to a memory optimized instance class
C. Change the DB instance to a burstable performance instance class
D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication
Answer: A
Explanation:
https://aws.amazon.com/ebs/features/
"Provisioned IOPS volumes are backed by solid-state drives (SSDs) and are the highest performance EBS volumes designed for your critical, I/O intensive database applications. These volumes are ideal for both IOPS-intensive and throughput-intensive workloads that require extremely low latency."
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
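The storage type of an existing RDS instance can be changed in place. The following boto3 sketch, with a hypothetical instance identifier and an illustrative IOPS figure, shows the modification; no application changes are needed:

import boto3

rds = boto3.client("rds")

# Move the instance from General Purpose SSD to Provisioned IOPS SSD.
# ApplyImmediately performs the change now instead of waiting for the next
# maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="items-mysql-prod",
    StorageType="io1",
    Iops=20000,
    ApplyImmediately=True,
)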
38. - (Topic 1)
A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low operational complexity.
Which architecture offers the HIGHEST availability?
A. Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
B. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
C. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
D. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.
Answer: D
Explanation:
Option D makes every tier of the system redundant across Availability Zones. Amazon MQ with active/standby brokers removes the single self-managed ActiveMQ instance as a point of failure, Amazon RDS for MySQL with Multi-AZ provides automatic database failover, and an Auto Scaling group spanning two Availability Zones automatically replaces failed consumer instances. Options B and C rely on a fixed pair of consumer instances with no automatic recovery, and option A leaves the company managing ActiveMQ and MySQL replication itself.

39. - (Topic 1)
A company's containerized application runs on an Amazon EC2 instance. The application needs to download security certificates before it can communicate with other business applications. The company wants a highly secure solution to encrypt and decrypt the certificates in near real time. The solution also needs to store data in highly available storage after the data is encrypted.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create AWS Secrets Manager secrets for encrypted certificates. Manually update the certificates as needed. Control access to the data by using fine-grained IAM access.
B. Create an AWS Lambda function that uses the Python cryptography library to receive and perform encryption operations. Store the function in an Amazon S3 bucket.
C. Create an AWS Key Management Service (AWS KMS) customer managed key. Allow the EC2 role to use the KMS key for encryption operations. Store the encrypted data on Amazon S3.
D. Create an AWS Key Management Service (AWS KMS) customer managed key. Allow the EC2 role to use the KMS key for encryption operations. Store the encrypted data on Amazon Elastic Block Store (Amazon EBS) volumes.
Answer: D
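Both KMS options share the same encryption workflow: grant the EC2 instance role permission to use a customer managed key, then call KMS from the application. A minimal boto3 sketch, with a hypothetical key alias and file name, looks like this:

import boto3

kms = boto3.client("kms")

with open("service-cert.pem", "rb") as f:
    plaintext = f.read()

# Direct KMS encryption handles payloads up to 4 KB; larger payloads would
# use a data key (envelope encryption) via generate_data_key instead.
ciphertext = kms.encrypt(
    KeyId="alias/app-certificates",
    Plaintext=plaintext,
)["CiphertextBlob"]

# Near-real-time decryption before use. KMS identifies the key from metadata
# embedded in the ciphertext blob.
recovered = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert recovered == plaintext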
40. - (Topic 1)
A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.
Which solution will meet these requirements?
A. Use Amazon Redshift with a single node for leader and compute functionality.
B. Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.
Answer: C
Explanation:
Amazon Aurora delivers up to five times the throughput of standard MySQL and is well suited to read-heavy workloads. Aurora Auto Scaling automatically adds or removes Aurora Replicas to match unpredictable read demand, and a Multi-AZ deployment maintains high availability.

41. - (Topic 1)
A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS.
What should a solutions architect do to meet this requirement?
A. Update the ALB's network ACL to accept only HTTPS traffic
B. Create a rule that replaces the HTTP in the URL with HTTPS.
C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
D. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).
Answer: C
Explanation:
An ALB listener rule can redirect HTTP requests to HTTPS without any changes to the application.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/elb-redirect-http-to-https-using-alb/
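The redirect is expressed as the default action of the port 80 listener. A minimal boto3 sketch follows, with a hypothetical load balancer ARN:

import boto3

elbv2 = boto3.client("elbv2")

# HTTP listener whose only job is to issue a permanent redirect to HTTPS,
# preserving the original host, path, and query string.
elbv2.create_listener(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/website/0123456789abcdef"
    ),
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "Host": "#{host}",
            "Path": "/#{path}",
            "Query": "#{query}",
            "StatusCode": "HTTP_301",
        },
    }],
)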
42. - (Topic 1)
A company needs to store data in Amazon S3 and must prevent the data from being changed. The company wants new objects that are uploaded to Amazon S3 to remain unchangeable for a nonspecific amount of time until the company decides to modify the objects. Only specific users in the company's AWS account can have the ability to delete the objects.
What should a solutions architect do to meet these requirements?
A. Create an S3 Glacier vault. Apply a write-once, read-many (WORM) vault lock policy to the objects.
B. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Set a retention period of 100 years. Use governance mode as the S3 bucket's default retention mode for new objects.
C. Create an S3 bucket. Use AWS CloudTrail to track any S3 API events that modify the objects. Upon notification, restore the modified objects from any backup versions that the company has.
D. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add the s3:PutObjectLegalHold permission to the IAM policies of users who need to delete the objects.
Answer: D
Explanation:
"The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until removed."
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-legal-hold.html
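A short boto3 sketch of placing and lifting a legal hold follows; the bucket and key names are illustrative, and the bucket must have been created with Object Lock enabled:

import boto3

s3 = boto3.client("s3")

# Place a legal hold: the object version cannot be overwritten or deleted
# until the hold is removed, with no retention clock involved.
s3.put_object_legal_hold(
    Bucket="example-locked-bucket",
    Key="records/report.csv",
    LegalHold={"Status": "ON"},
)

# Only principals granted s3:PutObjectLegalHold can lift the hold later.
s3.put_object_legal_hold(
    Bucket="example-locked-bucket",
    Key="records/report.csv",
    LegalHold={"Status": "OFF"},
)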
43. - (Topic 1)
A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company's product manager needs to access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide access to the product manager by following the principle of least privilege.
Which solution will meet these requirements?
A. Share the dashboard from the CloudWatch console. Enter the product manager's email address, and complete the sharing steps. Provide a shareable link for the dashboard to the product manager.
B. Create an IAM user specifically for the product manager. Attach the CloudWatchReadOnlyAccess managed policy to the user. Share the new login credential with the product manager. Share the browser URL of the correct dashboard with the product manager.
C. Create an IAM user for the company's employees. Attach the ViewOnlyAccess AWS managed policy to the IAM user. Share the new login credentials with the product manager. Ask the product manager to navigate to the CloudWatch console and locate the dashboard by name in the Dashboards section.
D. Deploy a bastion server in a public subnet. When the product manager requires access to the dashboard, start the server and share the RDP credentials. On the bastion server, ensure that the browser is configured to open the dashboard URL with cached AWS credentials that have appropriate permissions to view the dashboard.
Answer: B
Explanation:
To provide the product manager access to the Amazon CloudWatch dashboard while following the principle of least privilege, a solutions architect should create an IAM user specifically for the product manager and attach the CloudWatchReadOnlyAccess managed policy to the user. This policy allows the user to view the dashboard without being able to make any changes to it. The solutions architect should then share the new login credential with the product manager and provide the browser URL of the correct dashboard.
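A minimal boto3 sketch of option B follows; the user name and temporary password are illustrative placeholders:

import boto3

iam = boto3.client("iam")

# Dedicated IAM user for the product manager, console access only.
iam.create_user(UserName="product-manager")
iam.create_login_profile(
    UserName="product-manager",
    Password="TempPassw0rd!ChangeMe",
    PasswordResetRequired=True,  # must set a new password at first sign-in
)

# AWS managed read-only policy for CloudWatch: dashboards are viewable,
# nothing is writable.
iam.attach_user_policy(
    UserName="product-manager",
    PolicyArn="arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
)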
44. - (Topic 1)
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
A. Copy the data so both EBS volumes contain all the documents.
B. Configure the Application Load Balancer to direct a user to the server with the documents.
C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.
Answer: C
Explanation:
Amazon EFS provides a shared file system that both EC2 instances can mount concurrently, so every instance reads and writes the same set of documents.
https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html#how-it-works-ec2

45. - (Topic 1)
A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control.
Which solution will satisfy these requirements?
A. Configure Amazon EFS storage and set the Active Directory domain for authentication
B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones
C. Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume
D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication
Answer: D
Explanation:
Amazon FSx for Windows File Server provides fully managed, highly available Windows-native file storage with built-in Active Directory integration, which matches the requirements of a SharePoint deployment that needs SMB shares.

46. - (Topic 1)
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly depending on the time of day and day of the week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?
A. Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.
B. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.
C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
D. Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store the photos and metadata.
Answer: C
Explanation:
https://www.quora.com/How-can-I-use-DynamoDB-for-storing-metadata-for-Amazon-S3-objects
This solution meets the requirements of scalability, performance, and availability. AWS Lambda can process the photos in parallel and scale up or down automatically depending on demand. Amazon S3 can store the photos reliably and durably, with high availability and low latency. DynamoDB can store the metadata efficiently and provide consistent performance. This solution also reduces the cost and complexity of managing EC2 instances and EBS volumes.
Option A is incorrect because storing the photos in DynamoDB is not a good practice, as it can increase the storage cost and limit the throughput. Option B is incorrect because Kinesis Data Firehose is not designed for processing photos, but for streaming data to destinations such as S3 or Redshift. Option D is incorrect because increasing the number of EC2 instances and using Provisioned IOPS SSD volumes does not guarantee scalability, as it depends on the load balancer and the application code. It also increases the cost and complexity of managing the infrastructure.
References:
- https://aws.amazon.com/certification/certified-solutions-architect-professional/
- https://www.examtopics.com/discussions/amazon/view/7193-exam-aws-certified-solutions-architect-professional-topic-1/
- https://aws.amazon.com/architecture/
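A sketch of the Lambda side of option C, assuming the function is invoked by S3 object-created events and that a hypothetical DynamoDB table named PhotoMetadata holds the metadata; the frame-compositing step is stubbed:

import json

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("PhotoMetadata")

def handler(event, context):
    # The S3 event notification carries the bucket and key of the upload.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    photo = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    framed = photo  # the real frame-compositing logic would run here

    # The processed image goes back to S3; only the metadata goes to DynamoDB.
    s3.put_object(Bucket=bucket, Key=f"framed/{key}", Body=framed)
    table.put_item(Item={"photoId": key, "frameApplied": True})

    return {"statusCode": 200, "body": json.dumps({"output": f"framed/{key}"})}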
47. - (Topic 1)
An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3 bucket. The company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS) standard queue. The SQS queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users through email.
Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are invoking the Lambda function more than once, resulting in multiple email messages.
What should the solutions architect do to resolve this issue with the LEAST operational overhead?
A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds.
B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.
C. Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function timeout and the batch window timeout.
D. Modify the Lambda function to delete each message from the SQS queue immediately after the message is read before processing.
Answer: C
Explanation:
If the visibility timeout is shorter than the time the Lambda function needs to process a batch, the message becomes visible again before processing finishes and is delivered to another invocation, producing duplicate emails. Setting the visibility timeout higher than the function timeout plus the batch window prevents this redelivery with a single queue attribute change.
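For example, with a 120-second Lambda timeout and a 20-second batch window, the visibility timeout should comfortably exceed their sum. A minimal boto3 sketch, with an illustrative queue URL:

import boto3

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/image-events",
    Attributes={"VisibilityTimeout": "180"},  # seconds, greater than 120 + 20
)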
48. - (Topic 1)
A company wants to migrate an on-premises data center to AWS. The data center hosts an SFTP server that stores its data on an NFS-based file system. The server holds 200 GB of data that needs to be transferred. The server must be hosted on an Amazon EC2 instance that uses an Amazon Elastic File System (Amazon EFS) file system.
Which combination of steps should a solutions architect take to automate this task? (Select TWO.)
A. Launch the EC2 instance into the same Availability Zone as the EFS file system
B. Install an AWS DataSync agent in the on-premises data center
C. Create a secondary Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instance for the data
D. Manually use an operating system copy command to push the data to the EC2 instance
E. Use AWS DataSync to create a suitable location configuration for the on-premises SFTP server
Answer: B,E
Explanation:
AWS DataSync is an online data movement and discovery service that simplifies data migration and helps users quickly, easily, and securely move file or object data to, from, and between AWS storage services. To use AWS DataSync, the company needs to install a DataSync agent in the on-premises data center. The agent is a software appliance that connects to the source or destination storage system and handles the data transfer to or from AWS over the network. The company also needs to use DataSync to create a suitable location configuration for the on-premises SFTP server. A location is a logical representation of a storage system that contains files or objects to transfer. DataSync supports locations for NFS shares, SMB shares, HDFS file systems, self-managed object storage, Amazon S3 buckets, Amazon EFS file systems, Amazon FSx for Windows File Server file systems, Amazon FSx for Lustre file systems, Amazon FSx for OpenZFS file systems, Amazon FSx for NetApp ONTAP file systems, and AWS Snowcone devices.

49. - (Topic 1)
A company has a data ingestion workflow that consists of the following:
- An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries
- An AWS Lambda function to process the data and record metadata
The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function does not ingest the corresponding data unless the company manually reruns the job.
Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Select TWO.)
A. Configure the Lambda function in multiple Availability Zones.
B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.
C. Increase the CPU and memory that are allocated to the Lambda function.
D. Increase provisioned throughput for the Lambda function.
E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
Answer: B,E
Explanation:
To ensure that the Lambda function ingests all data despite occasional network connectivity issues, take the following actions:
- Create an Amazon SQS queue and subscribe it to the SNS topic. This decouples notification from processing, so that even if the processing Lambda function fails, the message remains in the queue for later processing.
- Modify the Lambda function to read from the SQS queue instead of directly from SNS. This enables retries and fault tolerance and ensures that all messages are eventually processed.
Reference:
https://aws.amazon.com/sns/
https://aws.amazon.com/sqs/
https://aws.amazon.com/lambda/
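A minimal boto3 sketch of wiring the queue to the topic follows. The queue name and the account and Region in the ARNs are illustrative placeholders; the queue policy is what allows SNS to deliver into the queue:

import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

queue_url = sqs.create_queue(QueueName="ingest-events")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

topic_arn = "arn:aws:sns:us-east-1:123456789012:new-data-deliveries"

# Allow this specific topic to send messages to the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
    }],
}
sqs.set_queue_attributes(QueueUrl=queue_url,
                         Attributes={"Policy": json.dumps(policy)})

sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)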
50. - (Topic 1)
A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years.
What is the MOST operationally efficient solution that meets these requirements?
A. Use DynamoDB point-in-time recovery to back up the table continuously.
B. Use AWS Backup to create backup schedules and retention policies for the table.
C. Create an on-demand backup of the table by using the DynamoDB console. Store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function. Configure the Lambda function to back up the table and to store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.
Answer: C

51. - (Topic 1)
An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24 hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the distributions. Store the order data in Amazon S3.
B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.
C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for MySQL.
D. Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.
Answer: D
Explanation:
Hosting static content in Amazon S3 behind CloudFront, with Amazon API Gateway and AWS Lambda for the backend APIs and Amazon DynamoDB for the data, is entirely serverless. It requires minimal operational overhead and can handle millions of requests each hour with millisecond latency during peak hours.
Reference: https://aws.amazon.com/blogs/compute/building-a-serverless-multi-player-game-with-aws-lambda-and-amazon-dynamodb/

52. - (Topic 1)
A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company's management team should have full access to all the visualizations. The rest of the company should have only limited access.
Which solution will meet these requirements?
A. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles.
B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.
C. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
D. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
Answer: B
Explanation:
Amazon QuickSight is a data visualization service that can create interactive dashboards and reports from various data sources, including Amazon S3 and Amazon RDS for PostgreSQL. You can connect all the data sources and create new datasets in QuickSight, and then publish dashboards to visualize the data. You can also share the dashboards with the appropriate users and groups and control their access levels.
Reference: https://docs.aws.amazon.com/quicksight/latest/user/working-with-data-sources.html

53. - (Topic 1)
A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access data. The solution must be fully managed.
Which AWS solution meets these requirements?
A. Create an AWS Storage Gateway volume gateway. Create a file share that uses the required client protocol. Connect the application server to the file share.
B. Create an AWS Storage Gateway tape gateway. Configure tapes to use Amazon S3. Connect the application server to the tape gateway.
C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.
Answer: D
Explanation:
Amazon FSx for Windows File Server is fully managed and has native support for Windows file system features and for the industry-standard Server Message Block (SMB) protocol to access file storage over a network.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html

54. - (Topic 1)
A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. According to the company's security regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?
A. Configure an S3 interface endpoint.
B. Configure an S3 gateway endpoint.
C. Create an S3 bucket in a private subnet.
D. Create an S3 bucket in the same Region as the EC2 instance.
Answer: B
Explanation:
A gateway endpoint for Amazon S3 adds routes to the VPC route tables so that traffic to S3 stays on the AWS network and never crosses the internet, at no additional charge.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-for-s3
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html
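A minimal boto3 sketch of creating the gateway endpoint follows; the VPC ID, route table ID, and Region are illustrative placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: route table entries are added automatically, so
# instances in the associated subnets reach S3 privately.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)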
55. - (Topic 1)
A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2 instances. The file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable storage solution that preserves how users currently access the files.
What should a solutions architect do to meet these requirements?
A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances.
C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.
Answer: C
Explanation:
Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers, backed by a fully native Windows file system, so users keep accessing their shares over SMB exactly as they do today. A Multi-AZ configuration makes the solution highly available and durable.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html

56. - (Topic 1)
A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on an Amazon EC2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the company's internet connection, to the bastion host and to the application servers. The solutions architect must make sure that the security groups of all the EC2 instances will allow that access.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)
A. Replace the current security group of the bastion host with one that only allows inbound access from the application instances
B. Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company
C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company
D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of the bastion host
E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of the bastion host
Answer: C,D
Explanation:
Traffic from the on-premises network reaches the bastion host over the internet, so the bastion's security group must allow the company's external (public) IP range. Traffic from the bastion to the application instances stays inside the VPC, so the application instances should allow SSH only from the bastion's private IP address.
https://digitalcloud.training/ssh-into-ec2-in-private-subnet/

57. - (Topic 1)
A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the production environment. The software that accesses this data requires consistently high I/O performance.
A solutions architect needs to minimize the time that is required to clone the production data into the test environment.
Which solution will meet these requirements?
A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment.
B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the production EBS volumes to the EC2 instances in the test environment.
C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production EBS snapshots.
D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.
Answer: C
Explanation:
Restoring EBS snapshots onto new EBS volumes in the test environment and attaching those volumes to test EC2 instances clones the data without affecting production, because the new volumes are independent copies. This approach minimizes the time required to clone the production data while preserving high I/O performance.
Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html

58. - (Topic 1)
A company has registered its domain name with Amazon Route 53. The company uses Amazon API Gateway in the ca-central-1 Region as a public interface for its backend microservice APIs. Third-party services consume the APIs securely.
The company wants to design its API Gateway URL with the company's domain name and corresponding certificate so that the third-party services can use HTTPS.
Which solution will meet these requirements?
A. Create stage variables in API Gateway with Name="Endpoint-URL" and Value="Company Domain Name" to overwrite the default URL. Import the pub