AWS CSAA Practice Questions PDF (2020)

Summary

This document is a set of practice questions for the AWS Solutions Architect Associate certification exam, published in 2020. It contains 390 questions organized into six sets. The questions are designed to reflect the difficulty and pattern of the actual AWS certification exam and are intended for training purposes.

Full Transcript

© 2020 Digital Cloud Training

GETTING STARTED

Welcome

Congratulations, you have just gained access to the highest-quality practice tests for the AWS Solutions Architect Associate certification exam. These practice tests will prepare you thoroughly for the real exam so that you pass with flying colors.

There are 6 practice exams with 65 questions each, and each set of practice exams includes questions from the four domains of the latest SAA-C02 exam. All 390 practice questions were designed to reflect the difficulty of the real AWS exam. With these practice tests, you'll know when you are ready to pass your AWS Solutions Architect Associate exam the first time!

We recommend re-taking these practice tests until you consistently score 80% or higher - that's when you're ready to sit the exam and achieve a great score!

If you want easy-to-pass questions, then these practice tests are not for you! Our students love these high-quality practice tests because they match the level of difficulty and exam pattern of the actual certification exam and help them understand the AWS concepts. Students who have recently passed the SAA-C02 exam confirm that these AWS practice questions are the most similar to the real exam.

I hope you get great value from this resource, which has been well received by our pool of over 100,000 students. Through diligent study of these questions, you will be in the perfect position to ace your AWS Certified Solutions Architect Associate exam the first time.

Wishing you all the best with your AWS certification exam.

Neal Davis
Founder of Digital Cloud Training

How to best use this resource

We have organized the 390 practice questions into 6 sets, and each set is repeated once without answers and explanations and once with answers and explanations. This allows you to choose from two methods of preparation.

1. Exam simulation

To simulate the exam experience, use the "PRACTICE QUESTIONS ONLY" sets. Grab a pen and paper to record your answers for all 65 questions. After completing each set, check your answers using the "PRACTICE QUESTIONS, ANSWERS & EXPLANATIONS" section.

To calculate your total score, sum up the number of correct answers and multiply it by 1.54 (the per-question weighting for 65 questions) to get your percentage score. For example, if you got 50 questions right, the calculation would be 50 x 1.54 = 77%. The pass mark of the official AWS exam is 72%.

2. Training mode

To use the practice questions as a learning tool, use the "PRACTICE QUESTIONS, ANSWERS & EXPLANATIONS" sets to view the answers and read the in-depth explanations as you move through the questions.

Key Training Advice

AIM FOR A MINIMUM SCORE OF 80%: Although the actual AWS exam has a pass mark of 72%, we recommend that you repeatedly retake our AWS practice exams until you consistently score 80% or higher. We encourage you to put in the work and study the explanations in detail. Once you achieve the recommended score in the practice tests, you are ready to sit the exam and achieve a great score!

FAMILIARIZE YOURSELF WITH THE QUESTION STYLE: Using our AWS practice exams helps you gain experience with the test question format and exam approach for the latest SAA-C02 exam. You'll become intimately familiar with how the questions in the real AWS exam are structured and will be adequately prepared for the real AWS exam experience.

DEEPEN YOUR KNOWLEDGE: Please note that though we match the AWS exam pattern, our AWS practice exams are NOT brain dumps. Don't expect to pass the real AWS certification exam by simply memorizing answers. Instead, we encourage you to use these practice tests to deepen your knowledge. This is your best chance to successfully pass your exam - no matter what questions you are presented with.
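The scoring rule above is plain arithmetic: each of the 65 questions is worth 100/65, roughly 1.54 percentage points. A minimal Python sketch of the calculation (the function name is illustrative, not from the book):

```python
# Score calculation for a 65-question practice set.
# Each correct answer is worth 100/65 (about 1.54) percentage points,
# which is where the book's "multiply by 1.54" rule comes from.

def percentage_score(correct: int, total_questions: int = 65) -> int:
    """Return the percentage score, rounded to the nearest whole percent."""
    return round(correct * 100 / total_questions)

print(percentage_score(50))        # 50 correct out of 65 -> 77
print(percentage_score(50) >= 72)  # True: meets the official 72% pass mark
```

Note that 50 x 1.54 = 77, matching the worked example in the text.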
Your Pathway to Success

Instructor-led Video Course

If you're new to AWS, we'd suggest first enrolling in the online instructor-led AWS Certified Solutions Architect Associate Hands-on Labs Video Course from Digital Cloud Training to familiarize yourself with the AWS platform before assessing your exam readiness with these practice exams.

Online practice exam simulator

If you are looking for more practice questions online, enrol in the practice exam course from Digital Cloud Training. Our online practice exams are delivered in 4 different variations:

Exam Mode: In exam simulation mode, you complete one full-length practice exam and answer all 65 questions within the allotted time. You are then presented with a pass/fail score report showing your overall score and performance in each knowledge area to identify your strengths and weaknesses.

Training Mode: When taking the practice exam in training mode, you will be shown the answers and explanations for every question after clicking "check". Upon completion of the exam, the score report will show your overall score and performance in each knowledge area.

Knowledge Reviews: Now that you have identified your strengths and weaknesses, you get to dive deep into specific areas with our knowledge reviews. You are presented with a series of questions focussed on a specific topic. There is no time limit and you can view the answer to each question as you go through them.

Final Exam Simulator: The exam simulator randomly selects 65 questions from our pool of over 500 unique questions - mimicking the real AWS exam environment. The practice exam has the same format, style, time limit and passing score as the real AWS exam.

Training Notes

Use the Training Notes for the AWS Certified Solutions Architect Associate from Digital Cloud to get a more detailed understanding of the AWS services and focus your study on the knowledge areas where you need it most. Deep dive into the SAA-C02 exam objectives with 300 pages of detailed facts, tables and diagrams to shortcut your time to success.

Feedback

The AWS platform is evolving quickly, and the exam tracks these changes with a typical lag of around 6 months. We are therefore reliant on student feedback to keep track of what is appearing in the exam. If there are any topics in your exam that weren't covered in our training resources, please provide us with feedback using this form: https://digitalcloud.training/student-feedback/. We appreciate any feedback that will help us further improve our AWS training resources.

Connect with the AWS Community

Our private Facebook group is a great place to ask questions and share knowledge and exam tips with the AWS community. Join the AWS Certification QA group on Facebook and share your exam feedback with the AWS community: https://www.facebook.com/groups/awscertificationqa

To join the discussion about all things related to Amazon Web Services on Slack, visit https://digitalcloud.training/contact/ for instructions.

Connect with Neal on Social Media

To learn about the different ways of connecting with Neal, visit: https://digitalcloud.training/neal-davis
youtube.com/c/digitalcloudtraining
digitalcloud.training/neal-davis
facebook.com/digitalcloudtraining
Twitter: @nealkdavis
linkedin.com/in/nealkdavis
Instagram: @digitalcloudtraining

TABLE OF CONTENTS

Getting Started
  Welcome
  How to best use this resource
  Key Training Advice
  Your Pathway to Success
  Feedback
  Connect with the AWS Community
  Connect with Neal on Social Media
Table of Contents
Set 1: Practice Questions only
Set 1: Practice Questions, Answers & Explanations
Set 2: Practice Questions only
Set 2: Practice Questions, Answers & Explanations
Set 3: Practice Questions only
Set 3: Practice Questions, Answers & Explanations
Set 4: Practice Questions only
Set 4: Practice Questions, Answers & Explanations
Set 5: Practice Questions only
Set 5: Practice Questions, Answers & Explanations
Set 6: Practice Questions only
Set 6: Practice Questions, Answers & Explanations
Conclusion
  Reach out and Connect
OTHER BOOKS & COURSES BY NEAL DAVIS
ABOUT THE AUTHOR

SET 1: PRACTICE QUESTIONS ONLY

For training purposes, go directly to Set 1: Practice Questions, Answers & Explanations.

1. Question
An application is being created that will use Amazon EC2 instances to generate and store data. Another set of EC2 instances will then analyze and modify the data. Storage requirements will be significant and will continue to grow over time. The application architects require a storage solution.
Which actions would meet these needs?
1: Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances
2: Store the data in an Amazon EFS filesystem. Mount the file system on the application instances
3: Store the data in Amazon S3 Glacier.
Update the vault policy to allow access to the application instances
4: Store the data in AWS Storage Gateway. Set up AWS Direct Connect between the Gateway appliance and the EC2 instances

2. Question
A company hosts a multiplayer game on AWS. The application uses Amazon EC2 instances in a single Availability Zone and users connect over Layer 4. A Solutions Architect has been tasked with making the architecture highly available and also more cost-effective.
How can the solutions architect best meet these requirements? (Select TWO)
1: Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically
2: Increase the number of instances and use smaller EC2 instance types
3: Configure a Network Load Balancer in front of the EC2 instances
4: Configure an Application Load Balancer in front of the EC2 instances
5: Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically

3. Question
A company delivers content to subscribers distributed globally from an application running on AWS. The application uses a fleet of Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to an update in copyright restrictions, it is necessary to block access for specific countries.
What is the EASIEST method to meet this requirement?
1: Modify the ALB security group to deny incoming traffic from blocked countries
2: Modify the security group for EC2 instances to deny incoming traffic from blocked countries
3: Use Amazon CloudFront to serve the application and deny access to blocked countries
4: Use a network ACL to block the IP address ranges associated with the specific countries

4. Question
A company stores important data in an Amazon S3 bucket. A solutions architect needs to ensure that data can be recovered in case of accidental deletion.
Which action will accomplish this?
1: Enable Amazon S3 versioning
2: Enable Amazon S3 Intelligent-Tiering
3: Enable an Amazon S3 lifecycle policy
4: Enable Amazon S3 cross-Region replication

5. Question
A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the company's applications stores files on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm.
Which service should the solutions architect use?
1: Amazon EFS
2: Amazon FSx
3: Amazon S3
4: AWS Storage Gateway

6. Question
A website runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB) which serves as an origin for an Amazon CloudFront distribution. An AWS WAF is being used to protect against SQL injection attacks. A review of security logs revealed an external malicious IP that needs to be blocked from accessing the website.
What should a solutions architect do to protect the application?
1: Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address
2: Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address
3: Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address
4: Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address

7. Question
An ecommerce website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The application is stateless and elastic and scales from a minimum of 10 instances up to a maximum of 200 instances. For at least 80% of the time, at least 40 instances are required.
Which solution should be used to minimize costs?
1: Purchase Reserved Instances to cover 200 instances
2: Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances
3: Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances
4: Purchase Reserved Instances to cover 40 instances. Use On-Demand and Spot Instances to cover the remaining instances

8. Question
A solutions architect is creating a system that will run analytics on financial data for 4 hours a night, 5 days a week. The analysis is expected to run for the same duration and cannot be interrupted once it is started. The system will be required for a minimum of 1 year.
Which type of Amazon EC2 instances should be used to reduce the cost of the system?
1: Spot Instances
2: On-Demand Instances
3: Standard Reserved Instances
4: Scheduled Reserved Instances

9. Question
A solutions architect needs to back up some application log files from an online ecommerce store to Amazon S3. It is unknown how often the logs will be accessed or which logs will be accessed the most. The solutions architect must keep costs as low as possible by using the appropriate S3 storage class.
Which S3 storage class should be implemented to meet these requirements?
1: S3 Glacier
2: S3 Intelligent-Tiering
3: S3 Standard-Infrequent Access (S3 Standard-IA)
4: S3 One Zone-Infrequent Access (S3 One Zone-IA)

10. Question
A solutions architect is designing a new service that will use an Amazon API Gateway API on the frontend. The service will need to persist data in a backend database using key-value requests. Initially, the data requirements will be around 1 GB and future growth is unknown. Requests can range from 0 to over 800 requests per second.
Which combination of AWS services would meet these requirements? (Select TWO)
1: AWS Fargate
2: AWS Lambda
3: Amazon DynamoDB
4: Amazon EC2 Auto Scaling
5: Amazon RDS

11. Question
A company's application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region.
Which combination of actions should the solutions architect take to accomplish this? (Select TWO)
1: Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second Region
2: Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region
3: Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the new instance
4: Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination
5: Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the second Region using that EBS volume

12. Question
A solutions architect is creating a document submission application for a school. The application will use an Amazon S3 bucket for storage. The solution must prevent accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to upload and modify the documents.
Which combination of actions should be taken to meet these requirements? (Select TWO)
1: Set read-only permissions on the bucket
2: Enable versioning on the bucket
3: Attach an IAM policy to the bucket
4: Enable MFA Delete on the bucket
5: Encrypt the bucket using AWS SSE-S3

13. Question
A solutions architect is designing an application on AWS. The compute layer will run in parallel across EC2 instances. The compute layer should scale based on the number of jobs to be processed. The compute layer is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?
1: Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application.
Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage
2: Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage
3: Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue
4: Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic

14. Question
A team are planning to run analytics jobs on log files each day and require a storage solution. The size and number of logs are unknown and data will persist for 24 hours only.
What is the MOST cost-effective solution?
1: Amazon S3 Glacier Deep Archive
2: Amazon S3 Standard
3: Amazon S3 Intelligent-Tiering
4: Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

15. Question
A company runs a web application that serves weather updates. The application runs on a fleet of Amazon EC2 instances in a Multi-AZ Auto Scaling group behind an Application Load Balancer (ALB). The instances store data in an Amazon Aurora database. A solutions architect needs to make the application more resilient to sporadic increases in request rates.
Which architecture should the solutions architect implement? (Select TWO)
1: Add an AWS WAF in front of the ALB
2: Add Amazon Aurora Replicas
3: Add an AWS Transit Gateway to the Availability Zones
4: Add an AWS Global Accelerator endpoint
5: Add an Amazon CloudFront distribution in front of the ALB

16. Question
An Amazon VPC contains several Amazon EC2 instances. The instances need to make API calls to Amazon DynamoDB. A solutions architect needs to ensure that the API calls do not traverse the internet.
How can this be accomplished? (Select TWO)
1: Create a route table entry for the endpoint
2: Create a gateway endpoint for DynamoDB
3: Create a new DynamoDB table that uses the endpoint
4: Create an ENI for the endpoint in each of the subnets of the VPC
5: Create a VPC peering connection between the VPC and DynamoDB

17. Question
A solutions architect is designing the infrastructure to run an application on Amazon EC2 instances. The application requires high availability and must dynamically scale based on demand to be cost efficient.
What should the solutions architect do to meet these requirements?
1: Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions
2: Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions
3: Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones
4: Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones

18. Question
A retail company with many stores and warehouses is implementing IoT sensors to gather monitoring data from devices in each location. The data will be sent to AWS in real time. A solutions architect must provide a solution for ensuring events are received in order for each device and ensure that data is saved for future processing.
Which solution would be MOST efficient?
1: Use Amazon Kinesis Data Streams for real-time events with a partition key for each device. Use Amazon Kinesis Data Firehose to save data to Amazon S3
2: Use Amazon Kinesis Data Streams for real-time events with a shard for each device. Use Amazon Kinesis Data Firehose to save data to Amazon EBS
3: Use an Amazon SQS FIFO queue for real-time events with one queue for each device. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS
4: Use an Amazon SQS standard queue for real-time events with one queue for each device. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3

19. Question
An organization wants to share regular updates about their charitable work using static webpages. The pages are expected to generate a large amount of views from around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
1: Generate presigned URLs for the files
2: Use cross-Region replication to all Regions
3: Use the geoproximity feature of Amazon Route 53
4: Use Amazon CloudFront with the S3 bucket as its origin

20. Question
An insurance company has a web application that serves users in the United Kingdom and Australia. The application includes a database tier using a MySQL database hosted in eu-west-2. The web tier runs from eu-west-2 and ap-southeast-2. Amazon Route 53 geoproximity routing is used to direct users to the closest web tier. It has been noted that Australian users receive slow response times to queries.
Which changes should be made to the database tier to improve performance?
1: Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in the Australian Region
2: Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable replication to additional Regions
3: Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of MySQL to reduce the load on the primary instance
4: Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in ap-southeast-2

21. Question
A web application runs in public and private subnets. The application architecture consists of a web tier and database tier running on Amazon EC2 instances. Both tiers run in a single Availability Zone (AZ).
Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO)
1: Create new public and private subnets in the same AZ for high availability
2: Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs
3: Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)
4: Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ
5: Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment

22. Question
An application running on an Amazon ECS container instance using the EC2 launch type needs permissions to write data to Amazon DynamoDB.
How can you assign these permissions only to the specific ECS task that is running the application?
1: Create an IAM policy with permissions to DynamoDB and attach it to the container instance
2: Create an IAM policy with permissions to DynamoDB and assign it to a task using the taskRoleArn parameter
3: Use a security group to allow outbound connections to DynamoDB and assign it to the container instance
4: Modify the AmazonECSTaskExecutionRolePolicy policy to add permissions for DynamoDB

23. Question
An organization has a large amount of data on Windows (SMB) file shares in their on-premises data center. The organization would like to move data into Amazon S3. They would like to automate the migration of data over their AWS Direct Connect link.
Which AWS service can assist them?
1: AWS Database Migration Service (DMS)
2: AWS CloudFormation
3: AWS Snowball
4: AWS DataSync

24. Question
The database tier of a web application is running on a Windows server on-premises.
The database is a Microsoft SQL Server database. The application owner would like to migrate the database to an Amazon RDS instance.
How can the migration be executed with minimal administrative effort and downtime?
1: Use the AWS Server Migration Service (SMS) to migrate the server to Amazon EC2. Use AWS Database Migration Service (DMS) to migrate the database to RDS
2: Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS
3: Use AWS DataSync to migrate the data from the database to Amazon S3. Use AWS Database Migration Service (DMS) to migrate the database to RDS
4: Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS. Use the Schema Conversion Tool (SCT) to enable conversion from Microsoft SQL Server to Amazon RDS

25. Question
A new application will run across multiple Amazon ECS tasks. Front-end application logic will process data and then pass that data to a back-end ECS task to perform further processing and write the data to a datastore. The Architect would like to reduce interdependencies so failures do not impact other components.
Which solution should the Architect use?
1: Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream
2: Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3
3: Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue
4: Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages

26. Question
An application receives images uploaded by customers and stores them on Amazon S3. An AWS Lambda function then processes the images to add graphical elements. The processed images need to be available for users to download for 30 days, after which time they can be deleted. Processed images can be easily recreated from original images. The original images need to be immediately available for 30 days and be accessible within 24 hours for another 90 days.
Which combination of Amazon S3 storage classes is most cost-effective for the original and processed images? (Select TWO)
1: Store the original images in STANDARD for 30 days, transition to GLACIER for 90 days, then expire the data
2: Store the original images in STANDARD_IA for 30 days and then transition to DEEP_ARCHIVE
3: Store the processed images in ONEZONE_IA and then expire the data after 30 days
4: Store the processed images in STANDARD and then transition to GLACIER after 30 days
5: Store the original images in STANDARD for 30 days, transition to DEEP_ARCHIVE for 90 days, then expire the data

27. Question
Amazon EC2 instances in a development environment run between 9am and 5pm Monday-Friday. Production instances run 24/7.
Which pricing models should be used? (Select TWO)
1: Use Spot instances for the development environment
2: Use Reserved instances for the development environment
3: Use Scheduled Reserved instances for the development environment
4: Use Reserved instances for the production environment
5: Use On-Demand instances for the production environment

28. Question
An application running on Amazon EC2 needs to asynchronously invoke an AWS Lambda function to perform data processing. The services should be decoupled.
Which service can be used to decouple the compute services?
1: Amazon SQS
2: Amazon SNS
3: Amazon MQ
4: AWS Step Functions

29. Question
A manual script that runs a few times a week and completes within 10 minutes needs to be replaced with an automated solution.
Which of the following options should an Architect use?
1: Use a cron job on an Amazon EC2 instance 2: Use AWS Batch 3: Use AWS Lambda 4: Use AWS CloudFormation 30. Question A company wishes to restrict access to their Amazon DynamoDB table to specific, private source IP addresses from their VPC. What should be done to secure access to the table? 1: Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI) 2: Create a gateway VPC endpoint and add an entry to the route table 3: Create the Amazon DynamoDB table in the VPC 4: Create an AWS VPN connection to the Amazon DynamoDB endpoint 31. Question An AWS Organization has an OU with multiple member accounts in it. The company needs to restrict the ability to launch only specific Amazon EC2 instance types. How can this policy be applied across the accounts with the least effort? 1: Create an SCP with an allow rule that allows launching the specific instance types 2: Create an SCP with a deny rule that denies all but the specific instance types 3: Create an IAM policy to deny launching all but the specific instance types 4: Use AWS Resource Access Manager to control which launch types can be used 32. Question A new relational database is being deployed on AWS. The performance requirements are unknown. Which database service does not require you to make capacity decisions upfront? 1: Amazon DynamoDB 2: Amazon Aurora Serverless 3: Amazon ElastiCache 4: Amazon RDS 33. Question An Amazon RDS Read Replica is being deployed in a separate region. The master database is not encrypted but all data in the new region must be encrypted. How can this be achieved? 
1: Enable encryption using Key Management Service (KMS) when creating the cross-region Read Replica
2: Encrypt a snapshot from the master DB instance, create an encrypted cross-region Read Replica from the snapshot
3: Enable encryption on the master DB instance, then create an encrypted cross-region Read Replica
4: Encrypt a snapshot from the master DB instance, create a new encrypted master DB instance, and then create an encrypted cross-region Read Replica
34. Question
A legacy tightly-coupled High Performance Computing (HPC) application will be migrated to AWS. Which network adapter type should be used?
1: Elastic Network Interface (ENI)
2: Elastic Network Adapter (ENA)
3: Elastic Fabric Adapter (EFA)
4: Elastic IP Address
35. Question
A new application is to be published in multiple regions around the world. The Architect needs to ensure only 2 IP addresses need to be whitelisted. The solution should intelligently route traffic for lowest latency and provide fast regional failover. How can this be achieved?
1: Launch EC2 instances into multiple regions behind an NLB with a static IP address
2: Launch EC2 instances into multiple regions behind an ALB and use a Route 53 failover routing policy
3: Launch EC2 instances into multiple regions behind an NLB and use AWS Global Accelerator
4: Launch EC2 instances into multiple regions behind an ALB and use Amazon CloudFront with a pair of static IP addresses
36. Question
A company is deploying a big data and analytics workload. The analytics will be run from a fleet of thousands of EC2 instances across multiple AZs. Data needs to be stored on a shared storage layer that can be mounted and accessed concurrently by all EC2 instances. Latency is not a concern; however, extremely high throughput is required. What storage layer would be most suitable for this requirement?
1: Amazon EFS in General Purpose mode
2: Amazon EFS in Max I/O mode
3: Amazon EBS PIOPS
4: Amazon S3
37. Question
A Solutions Architect is designing a highly-scalable system to track records. Records must remain available for immediate download for three months, and then the records must be deleted. What’s the most appropriate decision for this use case?
1: Store the files on Amazon EBS, and create a lifecycle policy to remove the files after three months
2: Store the files on Amazon Glacier, and create a lifecycle policy to remove the files after three months
3: Store the files on Amazon S3, and create a lifecycle policy to remove the files after three months
4: Store the files on Amazon EFS, and create a lifecycle policy to remove the files after three months
38. Question
You are a Solutions Architect at Digital Cloud Training. A large multi-national client has requested a design for a multi-region, multi-master database. The client has requested that the database be designed for fast, massively scaled applications for a global user base. The database should be a fully managed service including the replication. Which AWS service can deliver these requirements?
1: DynamoDB with Global Tables and Multi-Region Replication
2: EC2 instances with EBS replication
3: S3 with Cross Region Replication
4: RDS with Multi-AZ
39. Question
Your company is starting to use AWS to host new web-based applications. A new two-tier application will be deployed that provides customers with access to data records. It is important that the application is highly responsive and retrieval times are optimized. You’re looking for a persistent data store that can provide the required performance. From the list below what AWS service would you recommend for this requirement?
1: RDS in a multi-AZ configuration
2: ElastiCache with the Redis engine
3: Kinesis Data Streams
4: ElastiCache with the Memcached engine
40. Question
A Linux instance running in your VPC requires some configuration changes to be implemented locally and you need to run some commands.
Which of the following can be used to securely access the instance?
1: SSL/TLS certificate
2: Public key
3: Key Pairs
4: EC2 password
41. Question
A manufacturing company captures data from machines running at customer sites. Currently, thousands of machines send data every 5 minutes, and this is expected to grow to hundreds of thousands of machines in the near future. The data is logged with the intent to be analyzed in the future as needed. What is the SIMPLEST method to store this streaming data at scale?
1: Create an Amazon EC2 instance farm behind an ELB to store the data in Amazon EBS Cold HDD volumes
2: Create an Amazon SQS queue, and have the machines write to the queue
3: Create an Amazon Kinesis Firehose delivery stream to store the data in Amazon S3
4: Create an Auto Scaling Group of Amazon EC2 instances behind ELBs to write data into Amazon RDS
42. Question
There is a temporary need to share some video files that are stored in a private S3 bucket. The consumers do not have AWS accounts and you need to ensure that only authorized consumers can access the files. What is the best way to enable this access?
1: Enable public read access for the S3 bucket
2: Use CloudFront to distribute the files using authorization hash tags
3: Generate a pre-signed URL and distribute it to the consumers
4: Configure an allow rule in the Security Group for the IP addresses of the consumers
43. Question
A Solutions Architect needs to improve performance for a web application running on EC2 instances launched by an Auto Scaling group. The instances run behind an ELB Application Load Balancer. During heavy use periods the ASG doubles in size and analysis has shown that static content stored on the EC2 instances is being requested by users in a specific geographic location. How can the Solutions Architect reduce the need to scale and improve the application performance?
1: Store the contents on Amazon EFS instead of the EC2 root volume
2: Implement Amazon Redshift to create a repository of the content closer to the users
3: Create an Amazon CloudFront distribution for the site and redirect user traffic to the distribution
4: Re-deploy the application in a new VPC that is closer to the users making the requests
44. Question
A company needs to store data for 5 years. The company will need to have immediate and highly available access to the data at any point in time but will not require frequent access. Which lifecycle action should be taken to meet the requirements while reducing costs?
1: Transition objects from Amazon S3 Standard to the GLACIER storage class
2: Transition objects to expire after 5 years
3: Transition objects from Amazon S3 Standard to Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
4: Transition objects from Amazon S3 Standard to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
45. Question
A retail organization is deploying a new application that will read and write data to a database. The company wants to deploy the application in three different AWS Regions in an active-active configuration. The databases need to replicate to keep information in sync. Which solution best meets these requirements?
1: AWS Database Migration Service with change data capture
2: Amazon DynamoDB with global tables
3: Amazon Athena with Amazon S3 cross-region replication
4: Amazon Aurora Global Database
46. Question
You are a Solutions Architect at Digital Cloud Training. One of your clients runs an application that writes data to a DynamoDB table. The client has asked how they can implement a function that runs code in response to item level changes that take place in the DynamoDB table. What would you suggest to the client?
1: Enable server access logging and create an event source mapping between AWS Lambda and the S3 bucket to which the logs are written
2: Enable DynamoDB Streams and create an event source mapping between AWS Lambda and the relevant stream
3: Create a local secondary index that records item level changes and write some custom code that responds to updates to the index
4: Use Kinesis Data Streams and configure DynamoDB as a producer
47. Question
A recent security audit uncovered some poor deployment and configuration practices within your VPC. You need to ensure that applications are deployed in secure configurations. How can this be achieved in the most operationally efficient manner?
1: Remove the ability for staff to deploy applications
2: Use CloudFormation with securely configured templates
3: Manually check all application configurations before deployment
4: Use AWS Inspector to apply secure configurations
48. Question
A Solutions Architect needs to transform data that is being uploaded into S3. The uploads happen sporadically and the transformation should be triggered by an event. The transformed data should then be loaded into a target data store. What services would be used to deliver this solution in the MOST cost-effective manner? (Select TWO)
1: Configure a CloudWatch alarm to send a notification to CloudFormation when data is uploaded
2: Configure S3 event notifications to trigger a Lambda function when data is uploaded and use the Lambda function to trigger the ETL job
3: Configure CloudFormation to provision a Kinesis data stream to transform the data and load it into S3
4: Use AWS Glue to extract, transform and load the data into the target data store
5: Configure CloudFormation to provision AWS Data Pipeline to transform the data
49. Question
An application you manage uses Auto Scaling and a fleet of EC2 instances.
You recently noticed that Auto Scaling is scaling the number of instances up and down multiple times in the same hour. You need to implement a remediation to reduce the number of scaling events. The remediation must be cost-effective and preserve elasticity. What design changes would you implement? (Select TWO)
1: Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy
2: Modify the Auto Scaling group termination policy to terminate the newest instance first
3: Modify the Auto Scaling group termination policy to terminate the oldest instance first
4: Modify the Auto Scaling group cool-down timers
5: Modify the Auto Scaling policy to use scheduled scaling actions
50. Question
An application runs on two EC2 instances in private subnets split between two AZs. The application needs to connect to a CRM SaaS application running on the Internet. The vendor of the SaaS application restricts authentication to a whitelist of source IP addresses and only 2 IP addresses can be configured per customer. What is the most appropriate and cost-effective solution to enable authentication to the SaaS application?
1: Use a Network Load Balancer and configure a static IP for each AZ
2: Use multiple Internet-facing Application Load Balancers with Elastic IP addresses
3: Configure redundant Internet Gateways and update the routing tables for each subnet
4: Configure a NAT Gateway for each AZ with an Elastic IP address
51. Question
An application tier of a multi-tier web application currently hosts two web services on the same set of instances. The web services each listen for traffic on different ports. Which AWS service should a Solutions Architect use to route traffic to the service based on the incoming request path?
1: Amazon Route 53
2: Amazon CloudFront
3: Application Load Balancer (ALB)
4: Classic Load Balancer (CLB)
52. Question
The data scientists in your company are looking for a service that can process and analyze real-time, streaming data.
They would like to use standard SQL queries to query the streaming data. Which combination of AWS services would deliver these requirements?
1: DynamoDB and EMR
2: Kinesis Data Streams and Kinesis Data Analytics
3: ElastiCache and EMR
4: Kinesis Data Streams and Kinesis Firehose
53. Question
An e-commerce application is hosted in AWS. The last time a new product was launched, the application experienced a performance issue due to an enormous spike in traffic. Management decided that capacity must be doubled this week after the product is launched. What is the MOST efficient way for management to ensure that capacity requirements are met?
1: Add a Step Scaling policy
2: Add a Simple Scaling policy
3: Add a Scheduled Scaling action
4: Add Amazon EC2 Spot instances
54. Question
You need to configure an application to retain information about each user session and have decided to implement a layer within the application architecture to store this information. Which of the options below could be used? (Select TWO)
1: Sticky sessions on an Elastic Load Balancer (ELB)
2: A block storage service such as Elastic Block Store (EBS)
3: A workflow service such as Amazon Simple Workflow Service (SWF)
4: A relational data store such as Amazon RDS
5: A key/value store such as ElastiCache Redis
55. Question
An application running on an external website is attempting to initiate a request to your company’s website using API calls to Amazon API Gateway. A problem has been reported in which the requests are failing with an error that includes the following text:
“Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource”
You have been asked to resolve the problem, what is the most likely solution?
1: The IAM policy does not allow access to the API
2: The ACL on the API needs to be updated
3: The request is not secured with SSL/TLS
4: Enable CORS on the APIs resources using the selected methods under the API Gateway
56. Question
A Solutions Architect is designing a new workload where an AWS Lambda function will access an Amazon DynamoDB table. What is the MOST secure means of granting the Lambda function access to the DynamoDB table?
1: Create an identity and access management (IAM) role with the necessary permissions to access the DynamoDB table, and assign the role to the Lambda function
2: Create a DynamoDB username and password and give them to the Developer to use in the Lambda function
3: Create an identity and access management (IAM) user and create access and secret keys for the user. Give the user the necessary permissions to access the DynamoDB table. Have the Developer use these keys to access the resources
4: Create an identity and access management (IAM) role allowing access from AWS Lambda and assign the role to the DynamoDB table
57. Question
You are a Solutions Architect at a media company and you need to build an application stack that can receive customer comments from sporting events. The application is expected to receive significant load that could scale to millions of messages within a short space of time following high-profile matches. As you are unsure of the load required for the database layer, what is the most cost-effective way to ensure that the messages are not dropped?
1: Use DynamoDB and provision enough write capacity to handle the highest expected load
2: Write the data to an S3 bucket, configure RDS to poll the bucket for new messages
3: Create an SQS queue and modify the application to write to the SQS queue. Launch another application instance that polls the queue and writes messages to the database
4: Use RDS Auto Scaling for the database layer which will automatically scale as required
58. Question
An organization in the health industry needs to create an application that will transmit protected health data to thousands of service consumers in different AWS accounts.
The application servers run on EC2 instances in private VPC subnets. The routing for the application must be fault tolerant. What should be done to meet these requirements?
1: Create a virtual private gateway connection between each pair of service provider VPCs and service consumer VPCs
2: Create a proxy server in the service provider VPC to route requests from service consumers to the application servers
3: Create a VPC endpoint service and grant permissions to specific service consumers to create a connection
4: Create an internal Application Load Balancer in the service provider VPC and put application servers behind it
59. Question
A Solutions Architect is developing an encryption solution. The solution requires that data keys are encrypted using envelope protection before they are written to disk. Which solution option can assist with this requirement?
1: API Gateway with STS
2: IAM Access Key
3: AWS Certificate Manager
4: AWS KMS API
60. Question
A research company is developing a data lake solution in Amazon S3 to analyze huge datasets. The solution makes infrequent SQL queries only. In addition, the company wants to minimize infrastructure costs. Which AWS service should be used to meet these requirements?
1: Amazon Aurora
2: Amazon RDS for MySQL
3: Amazon Athena
4: Amazon Redshift Spectrum
61. Question
Your company shares some HR videos stored in an Amazon S3 bucket via CloudFront. You need to restrict access to the private content so users coming from specific IP addresses can access the videos and ensure direct access via the Amazon S3 bucket is not possible. How can this be achieved?
1: Configure CloudFront to require users to access the files using signed cookies, create an origin access identity (OAI) and instruct users to login with the OAI
2: Configure CloudFront to require users to access the files using a signed URL, create an origin access identity (OAI) and restrict access to the files in the Amazon S3 bucket to the OAI
3: Configure CloudFront to require users to access the files using signed cookies, and move the files to an encrypted EBS volume
4: Configure CloudFront to require users to access the files using a signed URL, and configure the S3 bucket as a website endpoint
62. Question
The company you work for is currently transitioning their infrastructure and applications into the AWS cloud. You are planning to deploy an Elastic Load Balancer (ELB) that distributes traffic for a web application running on EC2 instances. You still have some application servers running on-premise and you would like to distribute application traffic across both your AWS and on-premises resources. How can this be achieved?
1: Provision a Direct Connect connection between your on-premises location and AWS and create a target group on an ALB to use IP based targets for both your EC2 instances and on-premises servers
2: Provision a Direct Connect connection between your on-premises location and AWS and create a target group on an ALB to use Instance ID based targets for both your EC2 instances and on-premises servers
3: Provision an IPSec VPN connection between your on-premises location and AWS and create a CLB that uses cross-zone load balancing to distribute traffic across EC2 instances and on-premises servers
4: This cannot be done, ELBs are an AWS service and can only distribute traffic within the AWS cloud
63. Question
An application you are designing receives and processes files. The files are typically around 4GB in size and the application extracts metadata from the files which typically takes a few seconds for each file.
The pattern of updates is highly dynamic with times of little activity and then multiple uploads within a short period of time. What architecture will address this workload the most cost efficiently?
1: Use a Kinesis data stream to store the file, and use Lambda for processing
2: Store the file in an EBS volume which can then be accessed by another EC2 instance for processing
3: Upload files into an S3 bucket, and use the Amazon S3 event notification to invoke a Lambda function to extract the metadata
4: Place the files in an SQS queue, and use a fleet of EC2 instances to extract the metadata
64. Question
The website for a new application receives around 50,000 requests each second and the company wants to use multiple applications to analyze the navigation patterns of the users on their website so they can personalize the user experience. What can a Solutions Architect use to collect page clicks for the website and process them sequentially for each user?
1: Amazon Kinesis Data Streams
2: Amazon SQS FIFO queue
3: AWS CloudTrail trail
4: Amazon SQS standard queue
65. Question
You are building an application that will collect information about user behavior. The application will rapidly ingest large amounts of dynamic data and requires very low latency. The database must be scalable without incurring downtime. Which database would you recommend for this scenario?
1: RDS with MySQL
2: DynamoDB
3: RedShift
4: RDS with Microsoft SQL
SET 1: PRACTICE QUESTIONS, ANSWERS & EXPLANATIONS
1. Question
An application is being created that will use Amazon EC2 instances to generate and store data. Another set of EC2 instances will then analyze and modify the data. Storage requirements will be significant and will continue to grow over time. The application architects require a storage solution. Which actions would meet these needs?
1: Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances
2: Store the data in an Amazon EFS filesystem. Mount the file system on the application instances
3: Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the application instances
4: Store the data in AWS Storage Gateway. Setup AWS Direct Connect between the Gateway appliance and the EC2 instances
Answer: 2
Explanation:
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
[Diagram: an Amazon EFS file system mounted at /efs-mnt by EC2 instances in two Availability Zones and by an on-premises NFS file system client in a corporate data center (Linux only).]
Amazon EFS supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for workloads and applications running on more than one instance or server.
For this scenario, EFS is a great choice as it will provide a scalable file system that can be mounted by multiple EC2 instances and accessed simultaneously.
CORRECT: "Store the data in an Amazon EFS filesystem. Mount the file system on the application instances" is the correct answer.
INCORRECT: "Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances" is incorrect. Though there is a new feature (EBS multi-attach) that allows attaching a volume to multiple Nitro-based instances, this is not on the exam yet and has some specific constraints.
INCORRECT: "Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the application instances" is incorrect as S3 Glacier is not a suitable storage location for live access to data; it is used for archival.
INCORRECT: "Store the data in AWS Storage Gateway. Setup AWS Direct Connect between the Gateway appliance and the EC2 instances" is incorrect. There is no reason to store the data on-premises in a Storage Gateway; using EFS is a much better solution.
References:
https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-efs/
2. Question
A company hosts a multiplayer game on AWS. The application uses Amazon EC2 instances in a single Availability Zone and users connect over Layer 4. A Solutions Architect has been tasked with making the architecture highly available and also more cost-effective. How can the solutions architect best meet these requirements? (Select TWO)
1: Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically
2: Increase the number of instances and use smaller EC2 instance types
3: Configure a Network Load Balancer in front of the EC2 instances
4: Configure an Application Load Balancer in front of the EC2 instances
5: Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically
Answer: 3, 5
Explanation:
The solutions architect must enable high availability for the architecture and ensure it is cost-effective. To enable high availability an Amazon EC2 Auto Scaling group should be created to add and remove instances across multiple availability zones. In order to distribute the traffic to the instances the architecture should use a Network Load Balancer which operates at Layer 4. This architecture will also be cost-effective as the Auto Scaling group will ensure the right number of instances are running based on demand.
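As an illustration of the architecture described above, here is a minimal parameter sketch in boto3-style request dictionaries. All names, subnet IDs and ARNs are placeholders invented for the sketch, not real resources, and the dictionaries are shown without actually calling AWS.

```python
# Sketch: a Layer 4 (TCP) NLB listener plus a multi-AZ Auto Scaling group
# registered with the NLB's target group. Placeholder values throughout.
nlb_listener = {
    "Protocol": "TCP",  # Layer 4, as required by the game clients
    "Port": 7777,
}

asg_request = {
    "AutoScalingGroupName": "game-asg",          # placeholder name
    "MinSize": 2,
    "MaxSize": 20,
    # Subnets in two different Availability Zones provide high availability
    "VPCZoneIdentifier": "subnet-aaaa1111,subnet-bbbb2222",
    # Registering the group with the NLB target group spreads traffic
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/game-tg/0123456789abcdef"
    ],
}
```

The group spans more than one AZ and scales between its bounds on demand, which is what makes the design both highly available and cost-effective.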
CORRECT: "Configure a Network Load Balancer in front of the EC2 instances" is a correct answer.
CORRECT: "Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically" is also a correct answer.
INCORRECT: "Increase the number of instances and use smaller EC2 instance types" is incorrect as this is not the most cost-effective option. Auto Scaling should be used to maintain the right number of active instances.
INCORRECT: "Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically" is incorrect as this is not highly available as it’s a single AZ.
INCORRECT: "Configure an Application Load Balancer in front of the EC2 instances" is incorrect as an ALB operates at Layer 7 rather than Layer 4.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
3. Question
A company delivers content to subscribers distributed globally from an application running on AWS. The application uses a fleet of Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to an update in copyright restrictions, it is necessary to block access for specific countries. What is the EASIEST method to meet this requirement?
1: Modify the ALB security group to deny incoming traffic from blocked countries
2: Modify the security group for EC2 instances to deny incoming traffic from blocked countries
3: Use Amazon CloudFront to serve the application and deny access to blocked countries
4: Use a network ACL to block the IP address ranges associated with the specific countries
Answer: 3
Explanation:
When a user requests your content, CloudFront typically serves the requested content regardless of where the user is located. If you need to prevent users in specific countries from accessing your content, you can use the CloudFront geo restriction feature to do one of the following:
Allow your users to access your content only if they’re in one of the countries on a whitelist of approved countries.
Prevent your users from accessing your content if they’re in one of the countries on a blacklist of banned countries.
For example, if a request comes from a country where, for copyright reasons, you are not authorized to distribute your content, you can use CloudFront geo restriction to block the request. This is the easiest and most effective way to implement a geographic restriction for the delivery of content.
CORRECT: "Use Amazon CloudFront to serve the application and deny access to blocked countries" is the correct answer.
INCORRECT: "Use a network ACL to block the IP address ranges associated with the specific countries" is incorrect as this would be extremely difficult to manage.
INCORRECT: "Modify the ALB security group to deny incoming traffic from blocked countries" is incorrect as security groups cannot block traffic by country.
INCORRECT: "Modify the security group for EC2 instances to deny incoming traffic from blocked countries" is incorrect as security groups cannot block traffic by country.
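For illustration, the geo restriction described above maps to the Restrictions element of a CloudFront distribution config. The helper function and country codes below are illustrative; only the dictionary shape follows the CloudFront API.

```python
def geo_restriction(blocked_countries):
    """Build the Restrictions element of a CloudFront DistributionConfig
    that blacklists the given ISO 3166-1 alpha-2 country codes."""
    return {
        "GeoRestriction": {
            "RestrictionType": "blacklist",  # or "whitelist" to allow-list
            "Quantity": len(blocked_countries),
            "Items": list(blocked_countries),
        }
    }

# Example: block requests originating from two countries (codes illustrative)
restrictions = geo_restriction(["KP", "SY"])
```

This dictionary would be supplied as part of the distribution config when creating or updating the distribution; switching `RestrictionType` to `"whitelist"` inverts the logic to an approved-countries list.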
References: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content- delivery/amazon-cloudfront/ 4. Question A company stores important data in an Amazon S3 bucket. A solutions architect needs to ensure that data can be recovered in case of accidental deletion. Which action will accomplish this? 1: Enable Amazon S3 versioning 2: Enable Amazon S3 Intelligent-Tiering 3: Enable an Amazon S3 lifecycle policy 4: Enable Amazon S3 cross-Region replication Answer: 1 © 2020 Digital Cloud Training 25 Explanation: Object versioning is a means of keeping multiple variants of an object in the same Amazon S3 bucket. Versioning provides the ability to recover from both unintended user actions and application failures. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. CORRECT: "Enable Amazon S3 versioning" is the correct answer. INCORRECT: "Enable Amazon S3 Intelligent-Tiering" is incorrect. This is a storage class that automatically moves data between frequent access and infrequent access classes based on usage patterns. INCORRECT: "Enable an Amazon S3 lifecycle policy" is incorrect. An S3 lifecycle policy is a set of rules that define actions that apply to groups of S3 objects such as transitioning objects to another storage class. INCORRECT: "Enable Amazon S3 cross-Region replication" is incorrect as this is used to copy objects to different regions. CRR relies on versioning which is the feature that is required for protecting against accidental deletion. References: https://d0.awsstatic.com/whitepapers/protecting-s3-against-object-deletion.pdf Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/ 5. 
Question A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the company's applications stores files on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm. Which service should the solutions architect use? 1: Amazon EFS 2: Amazon FSx 3: Amazon S3 4: AWS Storage Gateway Answer: 2 Explanation: Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. Amazon FSx is built on Windows Server and provides a rich set of administrative features that include end-user file restore, user quotas, and Access Control Lists (ACLs). Additionally, Amazon FSX for Windows File Server supports Distributed File System Replication (DFSR) in both Single-AZ and Multi-AZ deployments as can be seen in the feature comparison table below. CORRECT: "Amazon FSx" is the correct answer. INCORRECT: "Amazon EFS" is incorrect as EFS only supports Linux systems. INCORRECT: "Amazon S3" is incorrect as this is not a suitable replacement for a Microsoft filesystem. INCORRECT: "AWS Storage Gateway" is incorrect as this service is primarily used for connecting on-premises storage to cloud storage. It consists of a software device installed on-premises and can be used with SMB © 2020 Digital Cloud Training 26 shares but it actually stores the data on S3. It is also used for migration. However, in this case the company need to replace the file server farm and Amazon FSx is the best choice for this job. References: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/high-availability-multiAZ.html Save time with our exam-specific cheat sheets: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-fsx/ 6. 
Question
A website runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB) which serves as an origin for an Amazon CloudFront distribution. An AWS WAF is being used to protect against SQL injection attacks. A review of security logs revealed an external malicious IP that needs to be blocked from accessing the website. What should a solutions architect do to protect the application?
1: Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address
2: Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address
3: Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address
4: Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address
Answer: 2
Explanation:
A new version of the AWS Web Application Firewall was released in November 2019. With AWS WAF classic you create “IP match conditions”, whereas with AWS WAF (new version) you create “IP set match statements”. Look out for wording on the exam.
The IP match condition / IP set match statement inspects the IP address of a web request's origin against a set of IP addresses and address ranges. Use this to allow or block web requests based on the IP addresses that the requests originate from. AWS WAF supports all IPv4 and IPv6 address ranges. An IP set can hold up to 10,000 IP addresses or IP address ranges to check.
CORRECT: "Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address" is the correct answer.
INCORRECT: "Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address" is incorrect as CloudFront does not sit within a subnet so network ACLs do not apply to it.
INCORRECT: "Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address" is incorrect as the source IP addresses of traffic arriving in the EC2 instances’ subnets will be the ELB IP addresses, not the malicious IP.
INCORRECT: "Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address" is incorrect as you cannot create deny rules with security groups.
References:
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-ipset-match.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-waf-and-shield/

7. Question
An ecommerce website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The application is stateless and elastic and scales from a minimum of 10 instances, up to a maximum of 200 instances. For at least 80% of the time at least 40 instances are required. Which solution should be used to minimize costs?
1: Purchase Reserved Instances to cover 200 instances
2: Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances
3: Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances
4: Purchase Reserved Instances to cover 40 instances. Use On-Demand and Spot Instances to cover the remaining instances
Answer: 4
Explanation:
In this case at least 40 instances are required for 80% of the time, which makes them good candidates for Reserved Instances, which can provide discounts of up to 72% over On-Demand Instances. On-Demand and Spot Instances should be used for the remaining instances: Spot Instances because the application is stateless, which minimizes costs, and On-Demand Instances for when Spot capacity isn’t available or the Spot price is not beneficial.
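The cost reasoning above can be sanity-checked with a toy model. The hourly rates below are hypothetical placeholders (not real AWS prices), and the question's 80%/20% usage split is simplified to a fixed monthly mix:

```python
# Toy cost model for the purchasing mix in this question.
# Rates are hypothetical placeholders, NOT real AWS prices.
HOURS_PER_MONTH = 730

RATE = {"reserved": 0.06, "on_demand": 0.10, "spot": 0.03}  # $/hr, hypothetical

def monthly_cost(reserved=0, on_demand=0, spot=0):
    """Cost of running a fixed number of instances of each type for a month."""
    counts = {"reserved": reserved, "on_demand": on_demand, "spot": spot}
    return sum(HOURS_PER_MONTH * RATE[k] * n for k, n in counts.items())

# Covering the steady 40-instance baseline with Reserved Instances
# beats covering it with On-Demand at these placeholder rates.
baseline_ri = monthly_cost(reserved=40)
baseline_od = monthly_cost(on_demand=40)
print(baseline_ri, baseline_od)
```

With these placeholder rates the reserved baseline costs roughly $1,750 per month versus roughly $2,900 for On-Demand; the burst capacity above 40 instances is then served by cheaper, interruptible Spot Instances where possible, with On-Demand as the fallback.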
CORRECT: "Purchase Reserved Instances to cover 40 instances. Use On-Demand and Spot Instances to cover the remaining instances" is the correct answer.
INCORRECT: "Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances" is incorrect as On-Demand Instances will not minimize costs. For the instances that will be required at a minimum, Reserved Instances should be used.
INCORRECT: "Purchase Reserved Instances to cover 200 instances" is incorrect as the extra instances above 40 are only used for less than 20% of the time. It would be better to reserve 40 instances only.
INCORRECT: "Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances" is incorrect as only 40 instances should be reserved as these are used 80% of the time. The remainder should be Spot Instances.
References:
https://aws.amazon.com/ec2/pricing/reserved-instances/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/

8. Question
A solutions architect is creating a system that will run analytics on financial data for 4 hours a night, 5 days a week. The analysis is expected to run for the same duration and cannot be interrupted once it is started. The system will be required for a minimum of 1 year. Which type of Amazon EC2 instances should be used to reduce the cost of the system?
1: Spot Instances
2: On-Demand Instances
3: Standard Reserved Instances
4: Scheduled Reserved Instances
Answer: 4
Explanation:
Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. You reserve the capacity in advance, so that you know it is available when you need it. You pay for the time that the instances are scheduled, even if you do not use them.
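A quick back-of-the-envelope calculation shows why a reservation billed around the clock is a poor fit for this workload (assuming exactly 52 weeks for simplicity):

```python
# Utilization math behind choosing Scheduled over Standard Reserved Instances.
# Simplified to exactly 52 weeks; a real year has a day or two more.
hours_needed = 4 * 5 * 52        # 4 hours/night x 5 nights/week -> 1040 h/year
hours_standard_ri = 24 * 365     # a Standard RI is billed for the full year
utilization = hours_needed / hours_standard_ri
print(f"{hours_needed} hours needed = {utilization:.0%} of a full-year reservation")
```

The workload needs only about 12% of the hours a full-year reservation bills for, which is why paying only for the scheduled windows is the cheaper option.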
Scheduled Instances are a good choice for workloads that do not run continuously, but do run on a regular schedule. For example, you can use Scheduled Instances for an application that runs during business hours or for batch processing that runs at the end of the week.
CORRECT: "Scheduled Reserved Instances" is the correct answer.
INCORRECT: "Standard Reserved Instances" is incorrect as the workload only runs for 4 hours a night, 5 days a week, so this would be more expensive.
INCORRECT: "On-Demand Instances" is incorrect as this would be much more expensive as there is no discount applied.
INCORRECT: "Spot Instances" is incorrect as the workload cannot be interrupted once started. With Spot Instances workloads can be terminated if the Spot price changes or capacity is required.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-scheduled-instances.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/

9. Question
A solutions architect needs to back up some application log files from an online ecommerce store to Amazon S3. It is unknown how often the logs will be accessed or which logs will be accessed the most. The solutions architect must keep costs as low as possible by using the appropriate S3 storage class. Which S3 storage class should be implemented to meet these requirements?
1: S3 Glacier
2: S3 Intelligent-Tiering
3: S3 Standard-Infrequent Access (S3 Standard-IA)
4: S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: 2
Explanation:
The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access.
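Conceptually, the tiering decision can be modelled as a simple rule: objects that have not been read for 30 consecutive days drop to the infrequent-access tier, and move back to the frequent tier on their next access. The sketch below is a toy illustration of that behaviour, not an AWS API; the 30-day threshold matches the documented Intelligent-Tiering rule:

```python
# Toy model of the S3 Intelligent-Tiering decision (not an AWS API).
FREQUENT, INFREQUENT = "FREQUENT_ACCESS", "INFREQUENT_ACCESS"

def tier_for(days_since_last_access: int) -> str:
    """Objects unread for 30 consecutive days sit in the lower-cost tier."""
    return INFREQUENT if days_since_last_access >= 30 else FREQUENT

print(tier_for(3))    # recently read object -> frequent tier
print(tier_for(45))   # untouched for 45 days -> infrequent tier
```

In practice you opt in per object simply by uploading with the `INTELLIGENT_TIERING` storage class (for example `--storage-class INTELLIGENT_TIERING` with the AWS CLI); the movement between tiers then happens automatically.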
This is an ideal use case for Intelligent-Tiering as the access patterns for the log files are not known.
CORRECT: "S3 Intelligent-Tiering" is the correct answer.
INCORRECT: "S3 Standard-Infrequent Access (S3 Standard-IA)" is incorrect as if the data is accessed often, retrieval fees could become expensive.
INCORRECT: "S3 One Zone-Infrequent Access (S3 One Zone-IA)" is incorrect as if the data is accessed often, retrieval fees could become expensive.
INCORRECT: "S3 Glacier" is incorrect as if the data is accessed often, retrieval fees could become expensive. Glacier also requires more work in retrieving the data from the archive and quick access requirements can add further costs.
References:
https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/

10. Question
A solutions architect is designing a new service that will use an Amazon API Gateway API on the frontend. The service will need to persist data in a backend database using key-value requests. Initially, the data requirements will be around 1 GB and future growth is unknown. Requests can range from 0 to over 800 requests per second. Which combination of AWS services would meet these requirements? (Select TWO)
1: AWS Fargate
2: AWS Lambda
3: Amazon DynamoDB
4: Amazon EC2 Auto Scaling
5: Amazon RDS
Answer: 2, 3
Explanation:
In this case AWS Lambda can perform the computation and store the data in an Amazon DynamoDB table. Lambda can scale concurrent executions to meet demand easily and DynamoDB is built for key-value data storage requirements and is also serverless and easily scalable. This is therefore a cost-effective solution for unpredictable workloads.
CORRECT: "AWS Lambda" is a correct answer.
CORRECT: "Amazon DynamoDB" is also a correct answer.
INCORRECT: "AWS Fargate" is incorrect as containers run constantly and therefore incur costs even when no requests are being made.
INCORRECT: "Amazon EC2 Auto Scaling" is incorrect as this uses EC2 instances which will incur costs even when no requests are being made.
INCORRECT: "Amazon RDS" is incorrect as this is a relational database, not a NoSQL database. It is therefore not suitable for key-value data storage requirements.
References:
https://aws.amazon.com/lambda/features/
https://aws.amazon.com/dynamodb/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-lambda/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/

11. Question
A company's application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region. Which combination of actions should the solutions architect take to accomplish this? (Select TWO)
1: Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second Region
2: Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region
3: Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the new instance
4: Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination
5: Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the second Region using that EBS volume
Answer: 2, 4
Explanation:
You can copy an Amazon Machine Image (AMI) within or across AWS Regions using the AWS Management Console, the AWS Command Line Interface or SDKs, or the Amazon EC2 API, all of which support the CopyImage action.
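As a sketch, a cross-Region copy via the CLI looks like the following; the AMI ID, Regions, and name are placeholders to adapt to your own account:

```shell
# Copy an AMI from us-east-1 into us-west-2 (ID, Regions, and name are placeholders).
aws ec2 copy-image \
    --source-image-id ami-0abcd1234example \
    --source-region us-east-1 \
    --region us-west-2 \
    --name "dr-copy-of-app-server"
```

The command returns the ID of the new AMI in the destination Region; once the copy completes, instances can be launched from it there.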
Using the copied AMI, the solutions architect would then be able to launch an instance in the second Region with the same configuration and EBS volume contents. Note: AMIs are stored on Amazon S3, however you cannot view them in the S3 management console or work with them programmatically using the S3 API.
CORRECT: "Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination" is a correct answer.
CORRECT: "Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region" is also a correct answer.
INCORRECT: "Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second Region" is incorrect. You cannot copy EBS volumes directly from EBS to Amazon S3.
INCORRECT: "Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the new instance" is incorrect. You cannot create an EBS volume directly from Amazon S3.
INCORRECT: "Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the second Region using that EBS volume" is incorrect. You cannot create an EBS volume directly from Amazon S3.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/

12. Question
A solutions architect is creating a document submission application for a school. The application will use an Amazon S3 bucket for storage. The solution must prevent accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to upload and modify the documents. Which combination of actions should be taken to meet these requirements?
(Select TWO)
1: Set read-only permissions on the bucket
2: Enable versioning on the bucket
3: Attach an IAM policy to the bucket
4: Enable MFA Delete on the bucket
5: Encrypt the bucket using AWS SSE-S3
Answer: 2, 4
Explanation:
None of the options present a good solution for specifying permissions required to write and modify objects, so that requirement needs to be taken care of separately. The other requirements are to prevent accidental deletion and to ensure that all versions of the document are available. The two solutions for these requirements are versioning and MFA Delete. Versioning will retain a copy of each version of the document and multi-factor authentication delete (MFA Delete) will prevent any accidental deletion as you need to supply a second factor when attempting a delete.
CORRECT: "Enable versioning on the bucket" is a correct answer.
CORRECT: "Enable MFA Delete on the bucket" is also a correct answer.
INCORRECT: "Set read-only permissions on the bucket" is incorrect as this will also prevent any writing to the bucket, which is not desired.
INCORRECT: "Attach an IAM policy to the bucket" is incorrect as users need to modify documents, which will also allow delete. Therefore, a method must be implemented to just control deletes.
INCORRECT: "Encrypt the bucket using AWS SSE-S3" is incorrect as encryption doesn’t stop you from deleting an object.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/

13. Question
A solutions architect is designing an application on AWS. The compute layer will run in parallel across EC2 instances. The compute layer should scale based on the number of jobs to be processed. The compute layer is stateless.
The solutions architect must ensure that the application is loosely coupled and the job items are durably stored. Which design should the solutions architect use?
1: Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage
2: Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage
3: Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue
4: Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic
Answer: 3
Explanation:
In this case we need to find a durable and loosely coupled solution for storing jobs. Amazon SQS is ideal for this use case and can be configured to use dynamic scaling based on the number of jobs waiting in the queue.
To configure this scaling you can use the backlog per instance metric with the target value being the acceptable backlog per instance to maintain. You can calculate these numbers as follows:
Backlog per instance: To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue (number of messages available for retrieval from the queue).
Divide that number by the fleet's running capacity, which for an Auto Scaling group is the number of instances in the InService state, to get the backlog per instance.
Acceptable backlog per instance: To calculate your target value, first determine what your application can accept in terms of latency. Then, take the acceptable latency value and divide it by the average time that an EC2 instance takes to process a message.
This solution will scale EC2 instances using Auto Scaling based on the number of jobs waiting in the SQS queue.
CORRECT: "Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue" is the correct answer.
INCORRECT: "Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage" is incorrect as scaling on network usage does not relate to the number of jobs waiting to be processed.
INCORRECT: "Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage" is incorrect. Amazon SNS is a notification service so it delivers notifications to subscribers. It does store data durably but is less suitable than SQS for this use case. Scaling on CPU usage is not the best solution as it does not relate to the number of jobs waiting to be processed.
INCORRECT: "Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application.
Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic" is incorrect. Amazon SNS is a notification service so it delivers notifications to subscribers. It does store data durably but is less suitable than SQS for this use case. Scaling on the number of notifications in SNS is not possible.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-sqs/

14. Question
A team are planning to run analytics jobs on log files each day and require a storage solution. The size and number of logs is unknown and data will persist for 24 hours only. What is the MOST cost-effective solution?
1: Amazon S3 Glacier Deep Archive
2: Amazon S3 Standard
3: Amazon S3 Intelligent-Tiering
4: Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: 2
Explanation:
S3 Standard is the best choice in this scenario for a short-term storage solution. In this case the size and number of logs is unknown and it would be difficult to fully assess the access patterns at this stage. Therefore, using S3 Standard is best as it is cost-effective, provides immediate access, and there are no retrieval fees or minimum capacity charge per object.
CORRECT: "Amazon S3 Standard" is the correct answer.
INCORRECT: "Amazon S3 Intelligent-Tiering" is incorrect as there is an additional fee for using this service and for a short-term requirement it may not be beneficial.
INCORRECT: "Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)" is incorrect as this storage class has a minimum capacity charge per object (128 KB) and a per GB retrieval fee.
INCORRECT: "Amazon S3 Glacier Deep Archive" is incorrect as this storage class is used for archiving data. There are retrieval fees and it takes hours to retrieve data from an archive.
References:
https://aws.amazon.com/s3/storage-classes/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/

15. Question
A company runs a web application that serves weather updates. The application runs on a fleet of Amazon EC2 instances in a Multi-AZ Auto Scaling group behind an Application Load Balancer (ALB). The instances store data in an Amazon Aurora database. A solutions architect needs to make the application more resilient to sporadic increases in request rates. Which architecture should the solutions architect implement? (Select TWO)
1: Add an AWS WAF in front of the ALB
2: Add Amazon Aurora Replicas
3: Add an AWS Transit Gateway to the Availability Zones
4: Add an AWS Global Accelerator endpoint
5: Add an Amazon CloudFront distribution in front of the ALB
Answer: 2, 5
Explanation:
The architecture is already highly resilient but may be subject to performance degradation if there are sudden increases in request rates. To resolve this situation Amazon Aurora Read Replicas can be used to serve read traffic, which offloads requests from the main database. On the frontend an Amazon CloudFront distribution can be placed in front of the ALB, and this will cache content for better performance and also offload requests from the backend.
CORRECT: "Add Amazon Aurora Replicas" is a correct answer.
CORRECT: "Add an Amazon CloudFront distribution in front of the ALB" is also a correct answer.
INCORRECT: "Add an AWS WAF in front of the ALB" is incorrect. A web application firewall protects applications from malicious attacks. It does not improve performance.
INCORRECT: "Add an AWS Transit Gateway to the Availability Zones" is incorrect as a transit gateway is used to interconnect VPCs and on-premises networks; it does not improve resilience to increases in request rates.
INCORRECT: "Add an AWS Global Accelerator endpoint" is incorrect as this service is used for directing users to different instances of the application in different Regions based on latency.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-aurora/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/

16. Question
An Amazon VPC contains several Amazon EC2 instances. The instances need to make API calls to Amazon DynamoDB. A solutions architect needs to ensure that the API calls do not traverse the internet. How can this be accomplished? (Select TWO)
1: Create a route table entry for the endpoint
2: Create a gateway endpoint for DynamoDB
3: Create a new DynamoDB table that uses the endpoint
4: Create an ENI for the endpoint in each of the subnets of the VPC
5: Create a VPC peering connection between the VPC and DynamoDB
Answer: 1, 2
Explanation:
Amazon DynamoDB and Amazon S3 support gateway endpoints, not interface endpoints. With a gateway endpoint you create the endpoint in the VPC, attach a policy allowing access to the service, and then specify the route tables in which to create a route entry.
[Figure: EC2 instances in the VPC's public and private subnets reach the AWS service through a gateway endpoint; the subnet route table contains a prefix-list destination (pl-6ca54005, com.amazonaws.ap-southeast-2.s3: 54.231.248.0/22, 54.231.252.0/24, 52.95.128.0/21) with the vpce ID as the target]
CORRECT: "Create a route table entry for the endpoint" is a correct answer.
CORRECT: "Create a gateway endpoint for DynamoDB" is also a correct answer.
INCORRECT: "Create a new DynamoDB table that uses the endpoint" is incorrect as it is not necessary to create a new DynamoDB table.
INCORRECT: "Create an ENI for the endpoint in each of the subnets of the VPC" is incorrect as an ENI is used by an interface endpoint, not a gateway endpoint.
INCORRECT: "Create a VPC peering connection between the VPC and DynamoDB" is incorrect as you cannot create a VPC peering connection between a VPC and a public AWS service as public services are outside of VPCs.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

17. Question
A solutions architect is designing the infrastructure to run an application on Amazon EC2 instances. The application requires high availability and must dynamically scale based on demand to be cost efficient. What should the solutions architect do to meet these requirements?
1: Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions
2: Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions
3: Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones
4: Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones
Answer: 3
Explanation:
The Amazon EC2-based application must be highly available and elastically scalable. Auto Scaling can provide the elasticity by dynamically launching and terminating instances based on demand.
This can take place across Availability Zones for high availability. Incoming connections can be distributed to the instances by using an Application Load Balancer (ALB).
CORRECT: "Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones" is the correct answer.
INCORRECT: "Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones" is incorrect as API Gateway is not used for load balancing connections to Amazon EC2 instances.
INCORRECT: "Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions" is incorrect as you cannot launch instances in multiple Regions from a single Auto Scaling group.
INCORRECT: "Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions" is incorrect as you cannot launch instances in multiple Regions from a single Auto Scaling group.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
https://aws.amazon.com/elasticloadbalancing/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/

18. Question
A retail company with many stores and warehouses is implementing IoT sensors to gather monitoring data from devices in each location. The data will be sent to AWS in real time. A solutions architect must provide a solution for ensuring events are received in order for each device and ensure that data is saved for future processing. Which solution would be MOST efficient?
1: Use Amazon Kinesis Data Streams for real-time events with a partition key for each device.
Use Amazon Kinesis Data Firehose to save data to Amazon S3
2: Use Amazon Kinesis Data Streams for real-time events with a shard for each device. Use Amazon Kinesis Data Firehose to save data to Amazon EBS
3: Use an Amazon SQS FIFO queue for real-time events with one queue for each device. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS
4: Use an Amazon SQS standard queue for real-time events with one queue for each device. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3
Answer: 1
Explanation:
Amazon Kinesis Data Streams collects and processes data in real time. A Kinesis data stream is a set of shards. Each shard has a sequence of data records. Each data record has a sequence number that is assigned by Kinesis Data Streams. A shard is a uniquely identified sequence of data records in a stream. A partition key is used to group data by shard within a stream. Kinesis Data Streams segregates the data records belonging to a stream into multiple shards. It uses the partition key that is associated with each data record to determine which shard a given data record belongs to.
[Figure: producers write records to the stream's shards; stream consumers on EC2 read from the shards and deliver data to destinations such as Amazon S3, Amazon DynamoDB, Amazon RedShift, Amazon EMR, and analytics tools]
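The per-device ordering guarantee follows from how partition keys map to shards: Kinesis takes an MD5 hash of the partition key and matches the resulting 128-bit integer against each shard's hash key range, so all records for one device land on the same shard. The sketch below is a simplified model of that routing (equal-sized ranges, not the AWS SDK):

```python
import hashlib

# Simplified model of Kinesis partition-key routing (not the AWS SDK).
# The partition key is MD5-hashed to a 128-bit integer and matched
# against a shard's hash key range; here the ranges are N equal slices.
NUM_SHARDS = 4
HASH_SPACE = 2 ** 128

def shard_for(partition_key: str) -> int:
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    return h * NUM_SHARDS // HASH_SPACE

# Every record for a given device hashes to the same shard, which is
# what preserves per-device ordering.
assert shard_for("device-42") == shard_for("device-42")
print(shard_for("device-42"), shard_for("device-17"))
```

Because the mapping is deterministic per key, using the device ID as the partition key keeps each device's events in order, while different devices spread across shards for throughput.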
