Amazon AWS Exam Questions & Answers PDF

Document Details


Uploaded by EnergySavingViolin

Tags

Amazon AWS, cloud computing, data analysis, online exam preparation

Summary

This document contains AWS exam practice questions, answers, and community discussion. The questions cover topics such as Amazon S3 Transfer Acceleration, Amazon Athena, restricting S3 access with AWS Organizations, gateway VPC endpoints, Amazon EFS, AWS Snowball Edge, SNS/SQS fan-out, Auto Scaling, and S3 lifecycle management. Each scenario asks for the practical solution with the least operational overhead, making the set suitable for professional cloud computing exam preparation.

Full Transcript


Topic 1 - Exam A

Question #1 Topic 1
A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection. The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity. Which solution meets these requirements?
A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
C. Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.
D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.
Correct Answer: A
Community vote distribution: A (93%), 7% other

D2w (Highly Voted, 10 months, 2 weeks ago) Selected Answer: A
S3 Transfer Acceleration is the best solution because it is fast and suited to high-speed connections; Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets. upvoted 41 times

Blest012 (1 month, 1 week ago)
Correct, the S3 Transfer Acceleration service is the best fit for this scenario. upvoted 1 times

BoboChow (10 months, 2 weeks ago)
I thought S3 Transfer Acceleration was based on Cross-Region Replication; I made a mistake. upvoted 1 times

dilopezat (Highly Voted, 4 months, 3 weeks ago)
Thank you ExamTopics! I am so happy; today, 06/04/2023, I passed the exam with 793. upvoted 13 times

Nezar_Mohsen (4 months, 2 weeks ago)
Is it enough to study the first 20 pages, which are free? upvoted 3 times

ashu089 (4 months, 2 weeks ago)
NOPE NOPE upvoted 1 times

Bmarodi (Most Recent, 6 days ago) Selected Answer: A
Amazon S3 Transfer Acceleration can speed up your content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. upvoted 1 times

Kavinsundar05 (2 weeks, 3 days ago)
Can anyone please send me the PDF of all these questions? Thanks in advance; it would be a great help. email- [email protected] upvoted 1 times

sandhyaejji (2 weeks, 5 days ago)
Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. Customers who have either web or mobile applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over the Internet. S3 Transfer Acceleration (S3TA) reduces the variability in Internet routing, congestion, and speeds that can affect transfers, and logically shortens the distance to S3 for remote applications.
upvoted 2 times   TariqKipkemei 4 weeks, 1 day ago Selected Answer: A S3 Transfer Acceleration improves transfer performance by routing traffic through Amazon CloudFront’s globally distributed Edge Locations and over AWS backbone networks, and by using network protocol optimizations. upvoted 1 times   jksnu 1 month ago Correct answer is A that is S3 Transfer Acceleration considering the points like 1) Having high internet speed 2) Need to transfer GBs of data regularly from different cities in different continents 3) With minimum operational cost. It is basically used when you have very fast internet service and collecting data from all over the world in GB or TB on regular basis. It first transfer the data to nearest Edge location and then this data is transferred to the target S3 bucket over an optimized network path. For more information, please visit the AWS url: https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html upvoted 1 times   Guru4Cloud 1 month, 1 week ago Selected Answer: B Explanation: Option B suggests uploading the data from each site to an S3 bucket in the closest Region. This ensures that the data is transferred quickly over high-speed Internet connections. By using S3 Cross-Region Replication, the objects can be automatically copied to the destination S3 bucket, allowing for easy aggregation. Once the data is successfully replicated, it can be removed from the origin S3 bucket to minimize storage costs and reduce complexity. Option A is not the best choice because it only focuses on optimizing the upload to the destination S3 bucket by using S3 Transfer Acceleration and multipart uploads. However, it doesn't address the requirement of aggregating data from multiple sites. upvoted 2 times   james2033 1 month, 1 week ago Selected Answer: A See video content of Amazon S3 Transfer Acceleration at https://www.youtube.com/watch?v=HnUqH3hdz4I with case study of Video content from cameras to a concentrate center. upvoted 1 times   miki111 1 month, 1 week ago AAAA is the right Answer. upvoted 1 times   HaiChamKhong 1 month, 2 weeks ago My answer: A upvoted 1 times   Foxy2021 1 month, 3 weeks ago I passed the test last month, several questions were similar to the questions posted here. Advise: study all the questions(Yes, ALL THE QUESTIONS, 500+ questions). The test was very hard. upvoted 2 times   KAMERO 2 months ago Selected Answer: A With Amazon S3 Transfer Acceleration, you can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. upvoted 1 times   Gullashekar 2 months ago Selected Answer: A 3 Transfer Acceleration is the best solution cz it's faster , good for high speed, Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets. upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: A To aggregate data from multiple global sites as quickly as possible in a single Amazon S3 bucket while minimizing operational complexity, the most suitable solution would be Option A: Turn on S3 Transfer Acceleration on the destination S3 bucket and use multipart uploads to directly upload site data to the destination S3 bucket. In summary, Option A provides the most efficient and operationally simple solution to aggregate data from multiple global sites quickly into a single Amazon S3 bucket. By leveraging S3 Transfer Acceleration and multipart uploads, the company can achieve rapid data ingestion while minimizing complexity. 
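To make the voted answer concrete, below is a minimal boto3 sketch of option A, assuming hypothetical bucket, key, and file names. It enables Transfer Acceleration on the destination bucket once, then uploads a large file through the s3-accelerate endpoint, letting boto3's transfer manager split the object into multipart uploads automatically.

import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

BUCKET = "example-central-telemetry-bucket"  # hypothetical destination bucket

# One-time setup: turn on Transfer Acceleration for the destination bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client that routes requests through the s3-accelerate edge endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Files above the threshold are uploaded as parallel multipart uploads.
transfer_cfg = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
)

s3_accel.upload_file(
    Filename="site-data.parquet",               # hypothetical local file
    Bucket=BUCKET,
    Key="raw/site-eu-01/site-data.parquet",     # hypothetical object key
    Config=transfer_cfg,
)

Because each site uploads directly to the single destination bucket, there are no per-Region buckets, replication rules, or cleanup jobs to operate, which is what keeps the design operationally simple.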
upvoted 2 times   Subh_fidelity 2 months, 1 week ago I believe it is very confusing practice. Should we rely on Answers given against questions or Most voted answers ? Exam topics should not give answers if most of the answers are wrong. upvoted 3 times   Rigsby 2 months, 2 weeks ago Passed this week with a score of 830. Style is identical to these questions - learn all the questions here and you will do well. upvoted 3 times Question #2 Topic 1 A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture. What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead? A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed. B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console. C. Use Amazon Athena directly with Amazon S3 to run the queries as needed. D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed. Correct Answer: C Community vote distribution C (100%)   airraid2010 Highly Voted  10 months, 2 weeks ago Answer: C https://docs.aws.amazon.com/athena/latest/ug/what-is.html Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. upvoted 40 times   BoboChow 10 months, 2 weeks ago I agree C is the answer upvoted 2 times   tt79 10 months, 2 weeks ago C is right. upvoted 1 times   PhucVuu Highly Voted  4 months, 3 weeks ago Selected Answer: C Keyword: - Queries will be simple and will run on-demand. - Minimal changes to the existing architecture. A: Incorrect - We have to do 2 step. load all content to Redshift and run SQL query (This is simple query so we can you Athena, for complex query we will apply Redshit) B: Incorrect - Our query will be run on-demand so we don't need to use CloudWatch Logs to store the logs. C: Correct - This is simple query we can apply Athena directly on S3 D: Incorrect - This take 2 step: use AWS Glue to catalog the logs and use Spark to run SQL query upvoted 15 times   Theocode Most Recent  2 weeks, 4 days ago Selected Answer: C A no-brainer, Athena can be used to query data directly from s3 upvoted 1 times   sandhyaejji 2 weeks, 5 days ago Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. upvoted 1 times   TariqKipkemei 4 weeks, 1 day ago Selected Answer: C Amazon Athena is a serverless, interactive analytics service used to query data in relational, nonrelational, object, and custom data sources running on S3. upvoted 1 times   Guru4Cloud 1 month, 1 week ago Selected Answer: C Explanation: Option C is the most suitable choice for this scenario. 
Amazon Athena is a serverless query service that allows you to analyze data directly from Amazon S3 using standard SQL queries. Since the log files are already stored in JSON format in an S3 bucket, there is no need for data transformation or loading into another service. Athena can directly query the JSON logs without the need for any additional infrastructure. upvoted 2 times   james2033 1 month, 1 week ago Selected Answer: C I remember question and answer in an easy way with case study https://www.youtube.com/watch?v=Dmw7HOOmiJQ upvoted 1 times   miki111 1 month, 1 week ago Best option is CCCC upvoted 1 times   animefan1 1 month, 3 weeks ago Selected Answer: C athena is great for querying data in s3 upvoted 1 times   oaidv 2 months ago Selected Answer: C C is right upvoted 1 times   KAMERO 2 months ago Selected Answer: C I agree C upvoted 1 times   Gullashekar 2 months ago Selected Answer: C https://docs.aws.amazon.com/athena/latest/ug/what-is.html upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: C To meet the requirements of analyzing log files stored in JSON format in an Amazon S3 bucket with minimal changes to the existing architecture and minimal operational overhead, the most suitable option would be Option C: Use Amazon Athena directly with Amazon S3 to run the queries as needed. Amazon Athena is a serverless interactive query service that allows you to analyze data directly from Amazon S3 using standard SQL queries. It eliminates the need for infrastructure provisioning or data loading, making it a low-overhead solution. Overall, Amazon Athena offers a straightforward and efficient solution for analyzing log files stored in JSON format, ensuring minimal operational overhead and compatibility with simple on-demand queries. upvoted 1 times   Bmarodi 2 months, 3 weeks ago Selected Answer: C C is answer. upvoted 1 times   cheese929 3 months, 1 week ago C is correct upvoted 1 times   rag300 4 months ago Selected Answer: C serverless to avoid operational overhead c is the answer upvoted 2 times   [Removed] 4 months ago It is difficult to use SQL to query JSON format files, which contradicts the simple query mentioned in the question and excludes all SQL options upvoted 2 times Question #3 Topic 1 A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations. Which solution meets these requirements with the LEAST amount of operational overhead? A. Add the aws PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy. B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy. C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly. D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy. Correct Answer: A Community vote distribution A (94%) 6%   ude Highly Voted  10 months, 2 weeks ago Selected Answer: A aws:PrincipalOrgID Validates if the principal accessing the resource belongs to an account in your organization. 
https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/ upvoted 39 times   BoboChow 10 months, 2 weeks ago the condition key aws:PrincipalOrgID can prevent the members who don't belong to your organization to access the resource upvoted 9 times   Naneyerocky Highly Voted  9 months, 3 weeks ago Selected Answer: A https://docs.aws.amazon.com/organizations/latest/userguide/orgs_permissions_overview.html Condition keys: AWS provides condition keys that you can query to provide more granular control over certain actions. The following condition keys are especially useful with AWS Organizations: aws:PrincipalOrgID – Simplifies specifying the Principal element in a resource-based policy. This global key provides an alternative to listing all the account IDs for all AWS accounts in an organization. Instead of listing all of the accounts that are members of an organization, you can specify the organization ID in the Condition element. aws:PrincipalOrgPaths – Use this condition key to match members of a specific organization root, an OU, or its children. The aws:PrincipalOrgPaths condition key returns true when the principal (root user, IAM user, or role) making the request is in the specified organization path. A path is a text representation of the structure of an AWS Organizations entity. upvoted 12 times   Sleepy_Lazy_Coder 2 weeks ago are we not choosing ou because the least overhead term was use? option B also seems correct upvoted 2 times   BlackMamba_4 17 hours, 5 minutes ago Exactly upvoted 1 times   Guru4Cloud Most Recent  1 month, 1 week ago Selected Answer: A This is the least operationally overhead solution because it does not require any additional infrastructure or configuration. AWS Organizations already tracks the organization ID of each account, so you can simply add the aws:PrincipalOrgID condition key to the S3 bucket policy and reference the organization ID. This will ensure that only users of accounts within the organization can access the S3 bucket upvoted 1 times   james2033 1 month, 1 week ago Selected Answer: A See video "Ensure identities and networks can only be used to access trusted resources" at https://youtu.be/cWVW0xAiWwc?t=677 at 11:17 use "aws:PrincipalOrgId": "o-fr75jjs531". upvoted 1 times   miki111 1 month, 1 week ago Option A MET THE REQUIREMENT upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: A Option A, which suggests adding the aws PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy, is a valid solution to limit access to the S3 bucket to users within the organization in AWS Organizations. It can effectively achieve the desired access control. It restricts access to the S3 bucket based on the organization ID, ensuring that only users within the organization can access the bucket. This method is suitable if you want to restrict access at the organization level rather than individual departments or organizational units. The operational overhead for Option A is also relatively low since it involves adding a global condition key to the S3 bucket policy. However, it is important to note that the organization ID must be accurately configured in the bucket policy to ensure the desired access control is enforced. In summary, Option A is a valid solution with minimal operational overhead that can limit access to the S3 bucket to users within the organization using the aws PrincipalOrgID global condition key. 
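As an illustration of the aws:PrincipalOrgID approach discussed above, here is a minimal boto3 sketch with a hypothetical bucket name and organization ID. It attaches a bucket policy that allows read access only when the calling principal belongs to an account inside the organization, so the policy never has to list individual account IDs.

import json
import boto3

BUCKET = "example-project-reports"   # hypothetical bucket name
ORG_ID = "o-exampleorgid"            # hypothetical AWS Organizations ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgMembersOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # Access is granted only to principals from accounts that are
            # members of the specified organization.
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))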
upvoted 1 times   karloscetina007 2 months, 1 week ago A is the correct answer. upvoted 1 times   Musti35 4 months, 2 weeks ago You can now use the aws:PrincipalOrgID condition key in your resource-based policies to more easily restrict access to IAM principals from accounts in your AWS organization. For more information about this global condition key and policy examples using aws:PrincipalOrgID, read the IAM documentation. upvoted 1 times   PhucVuu 4 months, 3 weeks ago Selected Answer: A Keywords: - Company uses AWS Organizations - Limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations - LEAST amount of operational overhead A: Correct - We just add PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy B: Incorrect - We can limit access by this way but this will take more amount of operational overhead C: Incorrect - AWS CloudTrail only log API events, we can not prevent user access to S3 bucket. For update S3 bucket policy to make it work you should manually add each account -> this way will not be cover in case of new user is added to Organization. D: Incorrect - We can limit access by this way but this will take most amount of operational overhead upvoted 7 times   linux_admin 4 months, 3 weeks ago Selected Answer: A Option A proposes adding the aws PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy. This would limit access to the S3 bucket to only users of accounts within the organization in AWS Organizations, as the aws PrincipalOrgID condition key can check if the request is coming from within the organization. upvoted 2 times   martin451 5 months ago B. Create an organizational unit (OU) for each department. Add the AWS: Principal Org Paths global condition key to the S3 bucket policy. This solution allows for the S3 bucket to only be accessed by users within the organization in AWS Organizations while minimizing operational overhead by organizing users into OUs and using a single global condition key in the bucket policy. Option A, adding the Principal ID global condition key, would require frequent updates to the policy as new users are added or removed from the organization. Option C, using CloudTrail to monitor events, would require manual updating of the policy based on the events. Option D, tagging each user, would also require manual tagging updates and may not be scalable for larger organizations with many users. upvoted 1 times   iamRohanKaushik 5 months, 1 week ago Selected Answer: A Answer is A. upvoted 1 times   buiducvu 6 months, 1 week ago Selected Answer: A A is correct upvoted 1 times   SilentMilli 7 months, 2 weeks ago Selected Answer: A This is the least operationally overhead solution because it requires only a single configuration change to the S3 bucket policy, which will allow access to the bucket for all users within the organization. The other options require ongoing management and maintenance. Option B requires the creation and maintenance of organizational units for each department. Option C requires monitoring of specific CloudTrail events and updates to the S3 bucket policy based on those events. Option D requires the creation and maintenance of tags for each user that needs access to the bucket. upvoted 1 times   Buruguduystunstugudunstuy 8 months, 1 week ago Selected Answer: A Answered by ChatGPT with an explanation. 
The correct solution that meets these requirements with the least amount of operational overhead is Option A: Add the aws PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy. Option A involves adding the aws:PrincipalOrgID global condition key to the S3 bucket policy, which allows you to specify the organization ID of the accounts that you want to grant access to the bucket. By adding this condition to the policy, you can limit access to the bucket to only users of accounts within the organization. upvoted 4 times   Buruguduystunstugudunstuy 8 months, 1 week ago Option B involves creating organizational units (OUs) for each department and adding the aws:PrincipalOrgPaths global condition key to the S3 bucket policy. This option would require more operational overhead, as it involves creating and managing OUs for each department. Option C involves using AWS CloudTrail to monitor certain events and updating the S3 bucket policy accordingly. While this option could potentially work, it would require ongoing monitoring and updates to the policy, which could increase operational overhead. upvoted 2 times   Buruguduystunstugudunstuy 8 months, 1 week ago Option D involves tagging each user that needs access to the S3 bucket and adding the aws:PrincipalTag global condition key to the S3 bucket policy. This option would require you to tag each user, which could be time-consuming and could increase operational overhead. Overall, Option A is the most straightforward and least operationally complex solution for limiting access to the S3 bucket to only users of accounts within the organization. upvoted 1 times   psr83 8 months, 1 week ago Selected Answer: A use a new condition key, aws:PrincipalOrgID, in these policies to require all principals accessing the resource to be from an account (including the master account) in the organization. For example, let’s say you have an Amazon S3 bucket policy and you want to restrict access to only principals from AWS accounts inside of your organization. To accomplish this, you can define the aws:PrincipalOrgID condition and set the value to your organization ID in the bucket policy. Your organization ID is what sets the access control on the S3 bucket. Additionally, when you use this condition, policy permissions apply when you add new accounts to this organization without requiring an update to the policy. upvoted 2 times   NikaCZ 8 months, 1 week ago Selected Answer: A aws:PrincipalOrgID – Simplifies specifying the Principal element in a resource-based policy. This global key provides an alternative to listing all the account IDs for all AWS accounts in an organization. upvoted 1 times Question #4 Topic 1 An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet. Which solution will provide private network connectivity to Amazon S3? A. Create a gateway VPC endpoint to the S3 bucket. B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket. C. Create an instance profile on Amazon EC2 to allow S3 access. D. Create an Amazon API Gateway API with a private link to access the S3 endpoint. 
Correct Answer: A Community vote distribution A (100%)   D2w Highly Voted  10 months, 2 weeks ago Selected Answer: A VPC endpoint allows you to connect to AWS services using a private network instead of using the public Internet upvoted 24 times   PhucVuu Highly Voted  4 months, 3 weeks ago Selected Answer: A Keywords: - EC2 in VPC - EC2 instance needs to access the S3 bucket without connectivity to the internet A: Correct - Gateway VPC endpoint can connect to S3 bucket privately without additional cost B: Incorrect - You can set up interface VPC endpoint for CloudWatch Logs for private network from EC2 to CloudWatch. But from CloudWatch to S3 bucket: Log data can take up to 12 hours to become available for export and the requirement only need EC2 to S3 C: Incorrect - Create an instance profile just grant access but not help EC2 connect to S3 privately D: Incorrect - API Gateway like the proxy which receive network from out site and it forward request to AWS Lambda, Amazon EC2, Elastic Load Balancing products such as Application Load Balancers or Classic Load Balancers, Amazon DynamoDB, Amazon Kinesis, or any publicly available HTTPS-based endpoint. But not S3 upvoted 17 times   Bmarodi Most Recent  6 days ago Selected Answer: A With a gateway endpoint, you can access Amazon S3 from your VPC, without requiring an internet gateway or NAT device for your VPC, and with no additional cost. However, gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. Ref. https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html upvoted 1 times   TariqKipkemei 4 weeks, 1 day ago Selected Answer: A A VPC endpoint enables customers to privately connect to supported AWS services and VPC endpoint services powered by AWS PrivateLink. https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html#:~:text=A-,VPC%20endpoint,- enables%20customers%20to upvoted 2 times   Guru4Cloud 1 month, 1 week ago Selected Answer: A The answer is A. Create a gateway VPC endpoint to the S3 bucket. A gateway VPC endpoint is a private way to connect to AWS services without using the internet. This is the best solution for the given scenario because it will allow the EC2 instance to access the S3 bucket without any internet connectivity upvoted 1 times   james2033 1 month, 1 week ago Selected Answer: A Keyword (1) EC2 in a VPC. (2)EC2 instance need access S3 bucket WITHOUT internet. Therefore, A is correct answer: Create a gateway VPC endpoint to S3 bucket. upvoted 1 times   miki111 1 month, 1 week ago Option A MET THE REQUIREMENT upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: A Here's why Option A is the correct choice: Gateway VPC Endpoint: A gateway VPC endpoint allows you to privately connect your VPC to supported AWS services. By creating a gateway VPC endpoint for S3, you can establish a private connection between your VPC and the S3 service without requiring internet connectivity. Private network connectivity: The gateway VPC endpoint for S3 enables your EC2 instance within the VPC to access the S3 bucket over the private network, ensuring secure and direct communication between the EC2 instance and S3. No internet connectivity required: Since the requirement is to access the S3 bucket without internet connectivity, the gateway VPC endpoint provides a private and direct connection to S3 without needing to route traffic through the internet. 
Minimal operational complexity: Setting up a gateway VPC endpoint is a straightforward process. It involves creating the endpoint and configuring the appropriate routing in the VPC. This solution minimizes operational complexity while providing the required private network connectivity. upvoted 1 times   Bmarodi 2 months, 3 weeks ago Selected Answer: A A is right answer. upvoted 1 times   cheese929 3 months, 1 week ago Selected Answer: A A is correct upvoted 1 times   channn 4 months, 3 weeks ago Selected Answer: A Option B) not provide private network connectivity to S3. Option C) not provide private network connectivity to S3. Option D) API Gateway with a private link provide private network connectivity between a VPC and an HTTP(S) endpoint, not S3. upvoted 2 times   linux_admin 4 months, 3 weeks ago Selected Answer: A Option A proposes creating a VPC endpoint for Amazon S3. A VPC endpoint enables private connectivity between the VPC and S3 without using an internet gateway or NAT device. This would provide the EC2 instance with private network connectivity to the S3 bucket. upvoted 2 times   dee_pandey 4 months, 3 weeks ago Could someone send me a pdf of this dump please? Thank you so much in advance! upvoted 1 times   Subhajeetpal 5 months ago Can anyone please send me the pdf of this whole dump... i can be very grateful. thanks in advance. email- [email protected] upvoted 1 times   tienhoboss 5 months ago Selected Answer: A A bạn ơi :) upvoted 1 times   iamRohanKaushik 5 months, 1 week ago Selected Answer: A Answer is A, but was confused with C, instance role will route through internet. upvoted 1 times   Folayinka 5 months, 4 weeks ago A VPC endpoint allows you to connect from the VPC to other AWS services outside of the VPC without the use of the internet. upvoted 1 times Question #5 Topic 1 A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time. What should a solutions architect propose to ensure users see all of their documents at once? A. Copy the data so both EBS volumes contain all the documents B. Configure the Application Load Balancer to direct a user to the server with the documents C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server Correct Answer: C Community vote distribution C (100%)   D2w Highly Voted  10 months, 2 weeks ago Selected Answer: C Concurrent or at the same time key word for EFS upvoted 27 times   mikey2000 Highly Voted  9 months, 1 week ago Ebs doesnt support cross az only reside in one Az but Efs does, that why it's c upvoted 19 times   pbpally 3 months, 2 weeks ago And just for clarification to others, you can have COPIES of the same EBS volume in one AZ and in another via EBS Snapshots, but don't confuse that with the idea of having some sort of global capability that has concurrent copying mechanisms. 
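A rough sketch of how the shared storage in option C might be provisioned with boto3 follows; the subnet and security-group IDs are hypothetical. Both EC2 instances mount the same EFS file system, so every uploaded document is visible from either Availability Zone.

import time
import boto3

efs = boto3.client("efs")

# Create a regional EFS file system that both web servers will share.
fs = efs.create_file_system(
    CreationToken="shared-documents-fs",   # hypothetical idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# Wait until the file system is available before adding mount targets.
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0][
    "LifeCycleState"
] != "available":
    time.sleep(5)

# One mount target per Availability Zone used by the web tier.
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:   # hypothetical subnets
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123abcd"],    # hypothetical SG allowing NFS (TCP 2049)
    )

# Each instance then mounts the file system once, for example with
# amazon-efs-utils:  sudo mount -t efs <FileSystemId>:/ /var/www/documents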
upvoted 4 times   TariqKipkemei Most Recent  4 weeks, 1 day ago Selected Answer: C Shared file storage = EFS upvoted 1 times   Guru4Cloud 1 month, 1 week ago Selected Answer: C The answer is C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS. The current architecture is using two separate EBS volumes, one for each EC2 instance. This means that each instance only has a subset of the documents. When a user refreshes the website, the Application Load Balancer will randomly direct them to one of the two instances. If the user's documents are not on the instance that they are directed to, they will not be able to see them. upvoted 1 times   james2033 1 month, 1 week ago Selected Answer: C Keyword "stores user-uploaded documents". Two EC2 instances behind Application Load Balancer. See https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html#efs-regional-ec2. In the diagram, Per Amazon EC2 in a different Availability zone, and Amazon Elastic File System support this case. Solution: Amazon Elastic File System, see https://aws.amazon.com/efs/. "Amazon EFS file system creation, mounting, and settings" https://www.youtube.com/watch?v=Aux37Nwe5nc. "Amazon EFS overview" https://www.youtube.com/watch?v=vAV4ASDnbN0. upvoted 1 times   miki111 1 month, 1 week ago Option C MET THE REQUIREMENT upvoted 1 times   FroZor 1 month, 2 weeks ago They could use Sicky sessions with EBS, if they don't want to use EFS upvoted 1 times   capino 1 month, 3 weeks ago EFS Reduces latency and all user can see all files at once upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: C To ensure users can see all their documents at once in the duplicated architecture with multiple EC2 instances and EBS volumes behind an Application Load Balancer, the most appropriate solution is Option C: Copy the data from both EBS volumes to Amazon EFS (Elastic File System) and modify the application to save new documents to Amazon EFS. In summary, Option C, which involves copying the data to Amazon EFS and modifying the application to use Amazon EFS for document storage, is the most appropriate solution to ensure users can see all their documents at once in the duplicated architecture. Amazon EFS provides scalability, availability, and shared access, allowing both EC2 instances to access and synchronize the documents seamlessly. upvoted 2 times   albertmunene 2 months, 1 week ago ffkfkffkfkf upvoted 1 times   Bmarodi 2 months, 3 weeks ago Selected Answer: C C is right answer. upvoted 1 times   Praveen_Ch 3 months ago Selected Answer: C C because the other options don't put all the data in one place. upvoted 1 times   smash_aws 3 months ago Option C is the best answer, option D is pretty vague. All other options are obviously wrong. upvoted 1 times   abhishek2021 3 months, 1 week ago The answer is B as it is aligned to min. b/w usage and also the time taken is 6-7 days which is about the same as transferring over the internet of 1G as per option C. upvoted 1 times   cheese929 3 months, 1 week ago Selected Answer: C C is correct upvoted 1 times   [Removed] 4 months ago If there is anyone who is willing to share his/her contributor access, then please write to [email protected] upvoted 2 times   IMTechguy 4 months ago Option A is not a good solution because copying data to both volumes would not ensure consistency of the data. 
Option B would require the Load Balancer to have knowledge of which documents are stored on which server, which would be difficult to maintain. Option C is a viable solution, but may require modifying the application to use Amazon EFS instead of EBS. Option D is a good solution because it would distribute the requests to both servers and return the correct document from the correct server. This can be achieved by configuring session stickiness on the Load Balancer so that each user's requests are directed to the same server for consistency. Therefore, the correct answer is D. upvoted 2 times Question #6 Topic 1 A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth. Which solution will meet these requirements? A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket. B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3. C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway. D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway. Correct Answer: C Community vote distribution B (84%) Other   Gatt Highly Voted  10 months, 1 week ago Selected Answer: B Let's analyse this: B. On a Snowball Edge device you can copy files with a speed of up to 100Gbps. 70TB will take around 5600 seconds, so very quickly, less than 2 hours. The downside is that it'll take between 4-6 working days to receive the device and then another 2-3 working days to send it back and for AWS to move the data onto S3 once it reaches them. Total time: 6-9 working days. Bandwidth used: 0. C. File Gateway uses the Internet, so maximum speed will be at most 1Gbps, so it'll take a minimum of 6.5 days and you use 70TB of Internet bandwidth. D. You can achieve speeds of up to 10Gbps with Direct Connect. Total time 15.5 hours and you will use 70TB of bandwidth. However, what's interesting is that the question does not specific what type of bandwidth? Direct Connect does not use your Internet bandwidth, as you will have a dedicate peer to peer connectivity between your on-prem and the AWS Cloud, so technically, you're not using your "public" bandwidth. The requirements are a bit too vague but I think that B is the most appropriate answer, although D might also be correct if the bandwidth usage refers strictly to your public connectivity. upvoted 53 times   LuckyAro 7 months, 2 weeks ago But it said "as soon as possible" It takes about 4-6 weeks to provision a direct connect. 
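For reference, an import job for the Snowball Edge approach in option B could be requested with boto3 roughly as sketched below. The address ID, role ARN, and bucket are hypothetical, and the device-type and capacity enum values should be verified against the current Snowball API before relying on them.

import boto3

snowball = boto3.client("snowball")

# Request a Snowball Edge Storage Optimized device that will import the
# copied files into the target S3 bucket when it is returned to AWS.
job = snowball.create_job(
    JobType="IMPORT",
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-video-archive"}  # hypothetical bucket
        ]
    },
    Description="NFS video archive migration",
    AddressId="ADID-example",                                     # hypothetical shipping address ID
    RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",  # hypothetical role
    SnowballType="EDGE_S",                                        # Storage Optimized (verify enum)
    SnowballCapacityPreference="T80",                             # 80 TB usable (verify enum)
    ShippingOption="SECOND_DAY",
)
print("Snowball job:", job["JobId"])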
upvoted 11 times   Uncolored8034 4 months, 3 weeks ago This calculation is out of the scope. C is right because the company wants to use the LEAST POSSIBLE NETWORK BANDWITH. Therefore they don't want or can't use the snowball capabilities of having a such fast connection because it draws too much bandwith within their company. upvoted 7 times   [Removed] 2 months, 4 weeks ago NFS is using bandwidth within their company, so that logic does not apply. upvoted 2 times   tribagus6 2 months, 3 weeks ago yeah first company use NFS file to store the data right then the company want to move to S3. with endpoint we dont need public connectivity upvoted 1 times   darn 4 months ago you are out of scope upvoted 5 times   Help2023 6 months, 1 week ago D is a viable solution but to setup D it can take weeks or months and the question does say as soon as possible. upvoted 4 times   ShlomiM 6 months, 1 week ago Time Calc Clarification: Data: 70TB =70TB*8b/B=560Tb =560Tb*1000G/1T=560000Gb Speed: 100Gb/s Time=Data:Speed=56000Gb:100Gb/s=5600s Time=5600s:3600s/hour=~1.5 hours (in case always on max speed) upvoted 2 times   tuloveu Highly Voted  10 months, 2 weeks ago Selected Answer: B As using the least possible network bandwidth. upvoted 28 times   kwang312 Most Recent  4 days, 4 hours ago Selected Answer: B I think B is better answer upvoted 1 times   triz 3 weeks, 3 days ago Selected Answer: C Correct answer is C. The key words are NFS, S3 and low bandwidth. S3 File Gateway retains access through NFS protocol so that existing users don't have to change anything while being able to use S3 to store the files. The question also gives a clue that we only need to upload existing 70TB and no new data is added. This gives a clue that users only need to read data via NFS and don't need to write to S3 bucket. The existing data can be quickly uploaded (via endpoint) to S3 bucket by admin. upvoted 2 times   abthakur 3 weeks, 5 days ago C is correct answer, reason S3 file gateway will consume least amount of bandwidth and can transfer data as fast as possible. upvoted 2 times   Kits 4 weeks, 1 day ago Selected Answer: C To me, it appears C is correct as it is mentioned to use the least network bandwidth no network bandwidth, so B doesn't come into the picture. upvoted 2 times   Kits 4 weeks, 1 day ago To me, it appears C is correct as it is mentioned to use the least network bandwidth no network bandwidth, so B doesn't come into the picture. upvoted 2 times   TariqKipkemei 4 weeks, 1 day ago Selected Answer: C To me this statement means they want to make the transfer over the internet. Hence C makes more sense as an option. upvoted 2 times   martinfrad 1 month ago Selected Answer: B Keys: 1. The total storage is 70 TB and is no longer growing 2. Must migrate the video files as soon as possible 3. Using the least possible network bandwidth AWS Snowball can transfer up to 80TB per device without use network bandwidth and transfer data at up to 100 Gbps. upvoted 1 times   Guru4Cloud 1 month ago Selected Answer: B Option B is the correct solution to meet the requirements. The reasons are: AWS Snowball Edge provides a physical storage device to transfer large local datasets to Amazon S3. This avoids using network bandwidth for the data transfer. Snowball Edge can transfer up to 80TB per device, so a single Snowball Edge job can handle the 70TB dataset. The client transfers data to the Snowball Edge device on-premises, avoiding the need to copy the data over the network. 
When AWS receives the device back, the data is imported to the S3 bucket. This achieves fast data migration without using network bandwidth. Option A would consume large amounts of network bandwidth for the data transfer. Options C and D use S3 File Gateway, which still requires the data to be sent over the network to S3. This does not meet the goal of minimizing network bandwidth. So Option B with Snowball Edge is the right approach to migrate the large dataset quickly while using minimal network bandwidth. upvoted 2 times   Nazmul123 1 month ago Snowball Edge is more suitable because with the storage capacity of 80TB, it is more suited for this kind of large scale transfer whereas AWS S3 file gateway is more suitable for smaller, ongoing transfer of data where we need a hybrid cloud storage environment upvoted 2 times   Guru4Cloud 1 month, 1 week ago Selected Answer: B The answer is B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3. This solution is the most efficient way to migrate the video files to Amazon S3. The Snowball Edge device can transfer data at up to 100 Gbps, which is much faster than the company's current network bandwidth. The Snowball Edge device is also a secure way to transfer data, as it is encrypted at rest and in transit. upvoted 2 times   james2033 1 month, 1 week ago Selected Answer: B Keyword "70 TB and is no longer growing", choose AWS Snowball ( https://aws.amazon.com/snowball/ ) upvoted 2 times   Ranfer 1 month, 1 week ago Selected Answer: B B. No bandwidth, fast enough and has the least operational overhead upvoted 1 times   miki111 1 month, 1 week ago Option B MET THE REQUIREMENT upvoted 1 times   dchch 1 month, 2 weeks ago keyword: as soon as possible. C is correct upvoted 1 times   jaydesai8 1 month, 2 weeks ago Selected Answer: B import video files with the least network bandwidth. B is correct upvoted 1 times Question #7 Topic 1 A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to decouple the solution and increase scalability. Which solution meets these requirements? A. Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages. B. Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU metrics. C. Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages. D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SOS) subscriptions. Configure the consumer applications to process the messages from the queues. Correct Answer: A Community vote distribution D (80%) A (16%) 3%   rein_chau Highly Voted  10 months, 2 weeks ago Selected Answer: D D makes more sense to me. upvoted 36 times   SilentMilli 7 months, 2 weeks ago By default, an SQS queue can handle a maximum of 3,000 messages per second. However, you can request higher throughput by contacting AWS Support. 
AWS can increase the message throughput for your queue beyond the default limits in increments of 300 messages per second, up to a maximum of 10,000 messages per second. It's important to note that the maximum number of messages per second that a queue can handle is not the same as the maximum number of requests per second that the SQS API can handle. The SQS API is designed to handle a high volume of requests per second, so it can be used to send messages to your queue at a rate that exceeds the maximum message throughput of the queue. upvoted 6 times   Abdel42 7 months, 2 weeks ago The limit that you're mentioning apply to FIFO queues. Standard queues are unlimited in throughput (https://aws.amazon.com/sqs/features/). Do you think that the use case require FIFO queue ? upvoted 11 times   daizy 6 months, 3 weeks ago D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues. This solution uses Amazon SNS and SQS to publish and subscribe to messages respectively, which decouples the system and enables scalability by allowing multiple consumer applications to process the messages in parallel. Additionally, using Amazon SQS with multiple subscriptions can provide increased resiliency by allowing multiple copies of the same message to be processed in parallel. upvoted 5 times   9014 8 months, 3 weeks ago of course, the answer is D upvoted 3 times   Bevemo Highly Voted  9 months, 3 weeks ago D. SNS Fan Out Pattern https://docs.aws.amazon.com/sns/latest/dg/sns-common-scenarios.html (A is wrong Kinesis Analysis does not 'persist' by itself.) upvoted 15 times   Syruis Most Recent  1 week ago Selected Answer: D "(Amazon SOS)" should be "(Amazon SQS)" it blow my mind :/ upvoted 1 times   BillyBlunts 1 week ago How are we to go into the test with such confusing answers....I agree as well that it should be D. However, I want to select the answer they say is right so I can pass the test. Everywhere I have looked including Quizlet who I think is pretty reliable, the answer is A. Would you guys say it is better to go with the answer they select, even though we majority agree it is a different answer? upvoted 1 times   Stevey 3 weeks, 2 days ago D. is the answer. The question states that there are dozens of other applications and microservices that consume these messages and that the volume of messages can vary drastically and increase suddenly. Therefore, you need a solution that can handle a high volume of messages, distribute them to multiple consumers, and scale quickly. SNS with SQS provides these capabilities. Publishing messages to an SNS topic with multiple SQS subscriptions is a common AWS pattern for achieving both decoupling and scalability in message-driven systems. SNS allows messages to be fanned out to multiple subscribers, which in this case would be SQS queues. Each consumer application could then process messages from its SQS queue at its own pace, providing scalability and ensuring that all messages are processed by all consumer applications. A. Amazon Kinesis Data Analytics is primarily used for real-time analysis of streaming data. It's not designed to distribute messages to multiple consumers. upvoted 1 times   sabs1981 3 weeks, 3 days ago Selected Answer: D Decoupling ingestion from subscription of messages. Should be done via queues and ideally SQS should be used in this case. 
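The fan-out pattern behind answer D can be sketched with boto3 as follows; the topic, queue, and consumer names are hypothetical. The ingestion application publishes each message once to the SNS topic, and every subscribed SQS queue receives its own copy, so consumers scale and fail independently of the producer.

import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# One topic for incoming messages; each consumer application gets its own queue.
topic_arn = sns.create_topic(Name="incoming-messages")["TopicArn"]

for consumer in ["billing", "inventory", "analytics"]:   # hypothetical consumers
    queue_url = sqs.create_queue(QueueName=f"{consumer}-messages")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Allow the topic to deliver messages into this queue.
    queue_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    }
    sqs.set_queue_attributes(
        QueueUrl=queue_url, Attributes={"Policy": json.dumps(queue_policy)}
    )

    # Fan out: every message published to the topic lands in every queue.
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"RawMessageDelivery": "true"},
    )

# The ingestion application publishes once per incoming message.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"orderId": "123"}))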
upvoted 1 times   Shanky9916 4 weeks, 1 day ago The answer is D, because de-coupling is used only in SNS and SQS. upvoted 2 times   Guru4Cloud 1 month, 1 week ago Selected Answer: D The answer is D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues. This solution is the most scalable and decoupled solution for the given scenario. Amazon SNS is a pub/sub messaging service that can be used to decouple applications. Amazon SQS is a fully managed message queuing service that can be used to store and process messages. The solution would work as follows: The ingestion application would publish the messages to an Amazon SNS topic. The Amazon SNS topic would have multiple Amazon SQS subscriptions. The consumer applications would subscribe to the Amazon SQS queues. The consumer applications would process the messages from the Amazon SQS queues. upvoted 1 times   Ranfer 1 month, 1 week ago Selected Answer: D Producer and consumer architecture works best with SQS (SNS) services upvoted 1 times   Kaab_B 1 month, 1 week ago Selected Answer: D The solution that meets the requirements of decoupling the solution and increasing scalability, considering the varying number of messages and sudden increases, is option D: Publish the messages to an Amazon Simple Notification Service (SNS) topic with multiple Amazon Simple Queue Service (SQS) subscription upvoted 1 times   miki111 1 month, 1 week ago Option D MET THE REQUIREMENT upvoted 1 times   RupeC 1 month, 2 weeks ago Selected Answer: D SNS + SQS Fan-out architecture. This is where messages from an SNS topic are fanned out to multiple SQS queues that subscribe to a topic. Thus the consumers can take their feeds from different queues. upvoted 1 times   jaydesai8 1 month, 2 weeks ago Selected Answer: D SQS have unlimited throughput and help easily decoupling the applications D makes more sense upvoted 1 times   gustvomonteiro 2 months ago D makes more sense! Keyword: Decoupling= SQS upvoted 2 times   cookieMr 2 months, 1 week ago Selected Answer: D Option A: It is more suitable for real-time analytics and processing of streaming data rather than decoupling and scaling message ingestion and consumption. Option B: It may help with scalability to some extent, but it doesn't provide decoupling. Option C: It is a valid option, but it lacks the decoupling aspect. In this approach, the consumer applications would still need to read directly from DynamoDB, creating tight coupling between the ingestion and consumption processes. Option D: It is the recommended solution for decoupling and scalability. The ingestion application can publish messages to an SNS topic, and multiple consumer apps can subscribe to the relevant SQS queues. SNS ensures that each message is delivered to all subscribed queues, allowing the consuming apps to independently process the messages at their own pace and scale horizontally as needed. This provides loose coupling, scalability, and fault tolerance, as the queues can handle message spikes and manage the consumption rate based on the consumer's processing capabilities. 
upvoted 3 times   Agustinr10 2 months, 1 week ago Selected Answer: D If its says decouple, then is SQS upvoted 1 times   karloscetina007 2 months, 1 week ago Selected Answer: A SNS and SQS still have an standard limit of tails under 3000 messages per sec, because SNS and SQS still have an standard limit of tails under 3000 messages per sec. It does not accomplishes with the requirement. Perharps it's a possible solution: Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages, but it requires that change the arch of this app, but this point is no matter on ther requirement. upvoted 1 times   Fresbie99 1 week, 5 days ago thats for FIFO queue. upvoted 1 times   Fresbie99 1 week, 5 days ago for standard queue there is nearly no limit to messages upvoted 1 times Question #8 Topic 1 A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability. How should a solutions architect design the architecture to meet these requirements? A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling. B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue. C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server. D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes. Correct Answer: C Community vote distribution B (94%) 3%   rein_chau Highly Voted  10 months, 2 weeks ago Selected Answer: B A - incorrect: Schedule scaling policy doesn't make sense. C, D - incorrect: Primary server should not be in same Auto Scaling group with compute nodes. B is correct. upvoted 52 times   Wilson_S 9 months, 2 weeks ago https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html upvoted 4 times   Sinaneos Highly Voted  10 months, 3 weeks ago Selected Answer: B The answer seems to be B for me: A: doesn't make sense to schedule auto-scaling C: Not sure how CloudTrail would be helpful in this case, at all. D: EventBridge is not really used for this purpose, wouldn't be very reliable upvoted 13 times   BillyBlunts Most Recent  20 hours, 4 minutes ago Can anyone tell me what answers we are to pick on the test. This site is driving me crazy with not having matching answers, especially with ones like this where the answer they have really doesn't seem like the right one. I am hesitant to pick the actual correct answer because the dumps have wrong answers. Any help is appreciated. 
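To illustrate the queue-depth scaling that option B describes, here is a minimal boto3 sketch using a hypothetical Auto Scaling group and queue name. It attaches a target-tracking policy on the SQS ApproximateNumberOfMessagesVisible metric; AWS's own guidance refines this into a backlog-per-instance custom metric, but the principle is the same: compute capacity follows the queue, not CPU.

import boto3

autoscaling = boto3.client("autoscaling")

# Keep the visible backlog of the jobs queue near the target value by
# scaling the compute-node Auto Scaling group in and out.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="compute-nodes-asg",   # hypothetical ASG name
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],  # hypothetical queue
            "Statistic": "Average",
        },
        "TargetValue": 100.0,   # desired backlog per the workload; tune as needed
    },
)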
upvoted 1 times   DavidArmas 1 week, 5 days ago Using CloudTrail as a "target for jobs" doesn't make sense, as CloudTrail is designed to audit and log API events in an AWS environment, not to manage jobs in a distributed application. upvoted 2 times   Teruteru 1 week, 6 days ago is saying the option C is the correct answer. But in the , the option B(97%) is most voted. So, do I still need to consider the C is the correct answer? or should I consider the most voted is the correct one? Sorry but I just confused. upvoted 2 times   niltriv98 1 week, 1 day ago I always consider the most voted answer to be correct and most of the time they explain as well why that option is correct. upvoted 1 times   Jenny9063 1 week, 6 days ago I am new to this and I am confused. why is the voted answer different from the correct answer? what is the accuracy of the correct answer? upvoted 1 times   VSiqueira 1 month ago aaa sadasdsadasdsadsadas upvoted 1 times   hakim1977 1 month ago Selected Answer: B why "C" is the answer ? CloudTrail has nothing to do here, that's the b the right answer upvoted 1 times   hakim1977 1 month ago why "C" is the answer ? CloudTrail has nothing to do here, that's the b the right answer upvoted 1 times   troubledev 1 month ago why "C" is the answer ? what cloud trial has to do here ? upvoted 1 times   Guru4Cloud 1 month, 1 week ago Selected Answer: B Based on the requirements stated, Option B is the most appropriate solution. It utilizes Amazon SQS for job destination and EC2 Auto Scaling based on the size of the queue to handle variable workloads while maximizing resiliency and scalability. upvoted 1 times   BHU_9ijn 1 month, 1 week ago Selected Answer: B CloudTrail is used for auditing, should not apply in work flow. Anyone knows why is C? upvoted 1 times   miki111 1 month, 1 week ago Option B MET THE REQUIREMENT upvoted 1 times   Charliesco 1 month, 2 weeks ago This is AWS auto scaling group concept with SQS hence B is correct upvoted 1 times   jaydesai8 1 month, 2 weeks ago Selected Answer: B B. SQS with EC2 ASG based on the queue size. This scales with the varying load. upvoted 1 times   animefan1 1 month, 3 weeks ago Selected Answer: B how is cloudtrail helpful in this case? asg + sqs makes more sense upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: B Configuring an Amazon SQS queue as a destination for the jobs, implementing compute nodes with EC2 instances managed in an Auto Scaling group, and configuring EC2 Auto Scaling based on the size of the queue is the most suitable solution. With this approach, the primary server can enqueue jobs into the SQS queue, and the compute nodes can dynamically scale based on the size of the queue. This ensures that the compute capacity adjusts according to the workload, maximizing resiliency and scalability. The SQS queue acts as a buffer, decoupling the primary server from the compute nodes and providing fault tolerance in case of failures or spikes in the workload. upvoted 3 times Question #9 Topic 1 A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are created. After 7 days the files are rarely accessed. The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available storage space without losing low-latency access to the most recently accessed files. 
The solutions architect must also provide file lifecycle management to avoid future storage issues. Which solution will meet these requirements? A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS. B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days. C. Create an Amazon FSx for Windows File Server file system to extend the company's storage space. D. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days. Correct Answer: D Community vote distribution B (86%) 13%   Sinaneos Highly Voted  10 months, 3 weeks ago Answer directly points towards file gateway with lifecycles, https://docs.aws.amazon.com/filegateway/latest/files3/CreatingAnSMBFileShare.html D is wrong because utility function is vague and there is no need for flexible storage. upvoted 38 times   Udoyen 8 months, 4 weeks ago Yes it might be vague but how do we keep the low-latency access that only flexible can offer? upvoted 2 times   SuperDuperPooperScooper 1 week, 2 days ago Low-latency access is only required for the first 7 days, B maintains that fast access for 7 days and only then are the files sent to Glacier Archive upvoted 1 times   javitech83 Highly Voted  8 months, 3 weeks ago Selected Answer: B B answwer is correct. low latency is only needed for newer files. Additionally, File GW provides low latency access by caching frequently accessed files locally so answer is B upvoted 19 times   BillyBlunts Most Recent  1 week ago Can anyone tell me what the test is going to use as the correct answer? I don't want to go into the test and just answer the ones that majority voted for but really they don't have it as the right answer. BTW the discussions are awesome and helps you learn a lot from other peoples intellect. upvoted 3 times   Fresbie99 1 week, 5 days ago Acc to the question we need lifecycle policy and file gateway - Also the S3 flexible retrieval is not required here, Deep archive is the solution. hence B is correct upvoted 1 times   McLobster 3 weeks, 6 days ago Selected Answer: A according to documentation the minimum storage timeframe for an object inside S3 before being able to transition using lifecycle policy is 30 days , so those 7 days policies kinda seem wrong to me Transition actions – These actions define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after creating them, or archive objects to the S3 Glacier Flexible Retrieval storage class one year after creating them. For more information, see Using Amazon S3 storage classes. https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html I was thinking of option A using DataSync as a scheduled task? am i wrong here? https://aws.amazon.com/datasync/ upvoted 1 times   TariqKipkemei 4 weeks ago Selected Answer: B Increase the company's available storage space without losing low-latency access to the most recently accessed files = Amazon S3 File Gateway offers SMB or NFS-based access to data in Amazon S3 with local caching. Provide file lifecycle management = S3 Lifecycle policy. 
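To make the lifecycle half of option B concrete, the transition is a single S3 Lifecycle rule on the bucket behind the S3 File Gateway share. A minimal sketch in Python with boto3, where the bucket name is hypothetical:

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket that backs the S3 File Gateway file share.
s3.put_bucket_lifecycle_configuration(
    Bucket="smb-file-share-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply the rule to every object
                "Transitions": [
                    # Files are rarely read after a week, so move them to the
                    # cheapest archival tier once they are 7 days old.
                    {"Days": 7, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)

The File Gateway cache keeps recently created files local for low-latency access, while this rule moves week-old objects into S3 Glacier Deep Archive in the background.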
upvoted 1 times   Zoro_ 4 weeks, 1 day ago It says rarely accessed so Flexible retrieval can be a better option please clarify on this for deep retrieval, it takes about 12 hours (maybe less than too) upvoted 1 times   hakim1977 1 month ago Selected Answer: B B answwer is correct. upvoted 1 times   Guru4Cloud 1 month ago Selected Answer: B The most appropriate solution for the given requirements would be: B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days. upvoted 1 times   premnick 1 month, 1 week ago Selected Answer: D The question says, after 7 days the file are "rarely accessed", which means there are still possibilities that these filed would be needed in short period of time. Glacier Flexible Retrieval would fit this requirement. upvoted 3 times   Kaab_B 1 month, 1 week ago Selected Answer: B The solution that will meet the requirements of increasing available storage space, maintaining low-latency access to recently accessed files, and providing file lifecycle management is option B: Create an Amazon S3 File Gateway and use an S3 Lifecycle policy to transition data to S3 Glacier Deep Archive after 7 days. upvoted 1 times   miki111 1 month, 1 week ago Option B MET THE REQUIREMENT upvoted 1 times   animefan1 1 month, 3 weeks ago Selected Answer: B file gateway is correct. upvoted 1 times   USalo 1 month, 4 weeks ago Selected Answer: B A. Isn't a good option because it doesn't address the cost-saving requirement B. Correct option, but there are no requirements specified about the access pattern of the data. So S3 AI might be preferable here. C. This one is not possible for an on-prem solution. D. Vierd option, I cannot google this tool. Or we are talking about AWS CLI and customization scripts... I won't go with it. upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: B Option B: Creating an Amazon S3 File Gateway is the recommended solution. It allows the company to extend their storage space by utilizing Amazon S3 as a scalable and durable storage service. The File Gateway provides low-latency access to the most recently accessed files by maintaining a local cache on-premises. The S3 Lifecycle policy can be configured to transition data to S3 Glacier Deep Archive after 7 days, which provides long-term archival storage at a lower cost. This approach addresses both the storage capacity issue and the need for file lifecycle management. upvoted 1 times   hypnozz 2 months, 2 weeks ago This question is tricky...Because, the B and D sounds good, however, you can not transition to S3 Glacier after 7 days, you have to wait at least 90 days for the D case, and 180 Days for the B...the A and C option are more suitable, but the thing is, the C option not mention about lifecycle policy and the A option there is not a thing to extend the storage...Then....I think the answer should be the A, because you are doing like a back up upvoted 2 times   Bmarodi 2 months, 3 weeks ago Selected Answer: B Option B is right answer. upvoted 1 times Question #10 Topic 1 A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received. Which solution will meet these requirements? A. 
Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing. B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing. C. Use an API Gateway authorizer to block any requests while the application processes an order. D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing. Correct Answer: A Community vote distribution B (99%)   Sinaneos Highly Voted  10 months, 3 weeks ago Selected Answer: B B because FIFO is made for that specific purpose upvoted 48 times   rein_chau Highly Voted  10 months, 2 weeks ago Selected Answer: B Should be B because SQS FIFO queue guarantees message order. upvoted 23 times   PLN6302 Most Recent  2 days ago option B upvoted 1 times   zjcorpuz 1 month ago B is the correct answer SQS configured as FIFO will suffice the requirement upvoted 1 times   prabhjot 1 month ago Ans is B(Amazon SQS FIFO queue.) - check the video here - https://aws.amazon.com/sqs/ upvoted 1 times   Guru4Cloud 1 month ago Selected Answer: B Explanation: - Amazon API Gateway will be used to receive the orders from the web application. - Instead of directly processing the orders, the API Gateway will integrate with an Amazon SQS FIFO queue. - FIFO (First-In-First-Out) queues in Amazon SQS ensure that messages are processed in the order they are received. - By using a FIFO queue, the order processing is guaranteed to be sequential, ensuring that the first order received is processed before the next one. - An AWS Lambda function can be configured to be triggered by the SQS FIFO queue, processing the orders as they arrive upvoted 3 times   james2033 1 month, 1 week ago Selected Answer: B Keyword "orders are processed in the order that they are received", choose what has word "SQS", there are B and D. B is better than D, with keyword "FIFO" is exist. upvoted 1 times   ShreyasAlavala 1 month, 2 weeks ago Can someone share the PDF to [email protected] upvoted 1 times   Multiverse 1 month, 2 weeks ago Selected Answer: B B. Process in the order received. SNS does not provide FIFO capabilities upvoted 1 times   jaydesai8 1 month, 2 weeks ago Selected Answer: B B because FIFO is made for that specific purpose upvoted 1 times   animefan1 1 month, 3 weeks ago B is correct. SQS fifo as order is important here upvoted 1 times   gustvomonteiro 2 months ago FIFO for sure, the correct is B upvoted 1 times   KAMERO 2 months ago Selected Answer: B First-in First-out strictly order upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: B Option B: Using an API Gateway integration to send a message to an Amazon SQS FIFO queue is the recommended solution. FIFO queues in Amazon SQS guarantee the order of message delivery, ensuring that orders are processed in the order they are received. By configuring the SQS FIFO queue to invoke an AWS Lambda function for processing, the application can process the orders sequentially while maintaining the order in which they were received. 
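As a concrete illustration of why the FIFO queue in option B preserves ordering: every message sent with the same MessageGroupId is delivered in the order it was accepted. In the actual design the API Gateway service integration submits the message, but the equivalent call looks like this sketch in Python with boto3, where the queue URL and order payload are hypothetical:

import json
import uuid
import boto3

sqs = boto3.client("sqs")

# Hypothetical FIFO queue -- note the mandatory ".fifo" suffix.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"

order = {"orderId": "1001", "items": ["sku-1", "sku-2"]}

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps(order),
    # Messages that share a MessageGroupId are delivered strictly in order.
    MessageGroupId="orders",
    # Required unless content-based deduplication is enabled on the queue.
    MessageDeduplicationId=str(uuid.uuid4()),
)

The Lambda consumer then receives the orders for that group strictly in the order they arrived, which a standard queue (option D) cannot guarantee.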
upvoted 4 times   karloscetina007 2 months, 1 week ago Selected Answer: B B: because FIFO is made for that specific purpose. It's important takes in consideration that the orders must be processed in the order received. upvoted 1 times   Bmarodi 2 months, 3 weeks ago Selected Answer: B Option B meets the requiremets. upvoted 1 times   HarryDinh209 2 months, 3 weeks ago B is the correct answer, just remember FIFO for ordered messages upvoted 1 times Question #11 Topic 1 A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management. What should a solutions architect do to accomplish this goal? A. Use AWS Secrets Manager. Turn on automatic rotation. B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation. C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket. D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume. Correct Answer: B Community vote distribution A (96%) 3%   Sinaneos Highly Voted  10 months, 3 weeks ago Selected Answer: A B is wrong because parameter store does not support auto rotation, unless the customer writes it themselves, A is the answer. upvoted 64 times   17Master 10 months ago READ!!! AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT resources. This service enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. https://aws.amazon.com/cn/blogs/security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/ y https://aws.amazon.com/secrets-manager/?nc1=h_ls upvoted 16 times   HarishArul 3 months ago Read this - https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_parameterstore.html It says SSM Parameter store cant rotate automatically. upvoted 2 times   kewl 8 months, 3 weeks ago correct. see link https://tutorialsdojo.com/aws-secrets-manager-vs-systems-manager-parameter-store/ for differences between SSM Parameter Store and AWS Secrets Manager upvoted 13 times   mrbottomwood 8 months, 2 weeks ago That was a fantastic link. This part of their site "comparison of AWS services" is superb. Thanks. upvoted 5 times   iCcma 10 months, 1 week ago ty bro, I was confused about that and you just mentioned the "key" phrase, B doesn't support autorotation upvoted 2 times   leeyoung Highly Voted  7 months, 4 weeks ago Admin is trying to fail everybody in the exam. upvoted 42 times   perception 3 months, 4 weeks ago He wants you to read discussion part as well for better understanding upvoted 2 times   acuaws 4 months, 3 weeks ago RIGHT? I found a bunch of "correct" answers on here are not really correct, but they're not corrected? hhmmmmm upvoted 1 times   PLN6302 Most Recent  2 days ago Option A upvoted 1 times   kapalulz 2 weeks, 5 days ago Selected Answer: A parameter store does not support auto rotation and AWS Secrets Manager does. 
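To show what option A removes from the instances: instead of reading a local credentials file, the application asks Secrets Manager for the current secret at runtime, so automatic rotation never breaks it. A minimal sketch in Python with boto3, where the secret name is hypothetical:

import json
import boto3

secrets = boto3.client("secretsmanager")

# Hypothetical secret name; with rotation turned on, every call returns the
# currently valid credentials, so nothing is stored on the instance.
secret = secrets.get_secret_value(SecretId="prod/aurora/app-user")
creds = json.loads(secret["SecretString"])

db_user = creds["username"]
db_password = creds["password"]
# ... open the Aurora connection with db_user / db_password ...

Because the secret is fetched on demand, the rotation that Secrets Manager performs requires no change on the EC2 side, which is what keeps the operational overhead low.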
upvoted 1 times   TariqKipkemei 4 weeks ago Selected Answer: A Minimize the operational overhead of credential management = AWS Secrets Manager upvoted 1 times   hakim1977 1 month ago Selected Answer: A The answer is A without hesitation, in the AWS console: Secret manager => store a ne secret => choose the "Credentials for Amazon RDS database" option https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-db.html upvoted 1 times   oguzbeliren 1 month ago Parameter store doesn't support password rotation, but secret manger does. upvoted 1 times   Guru4Cloud 1 month ago Selected Answer: A Here is the explanation: AWS Secrets Manager is a service that provides a secure and convenient way to store, manage, and rotate secrets. Secrets Manager can be used to store database credentials, SSH keys, and other sensitive information. AWS Secrets Manager also supports automatic rotation, which can help to minimize the operational overhead of credential management. When automatic rotation is enabled, Secrets Manager will automatically generate new secrets and rotate them on a regular schedule upvoted 1 times   james2033 1 month, 1 week ago Selected Answer: A See https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html#:~:text=rotate%20database%20credentials upvoted 1 times   miki111 1 month, 1 week ago Option A MET THE REQUIREMENT upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: A Option A: Using AWS Secrets Manager and enabling automatic rotation is the recommended solution for minimizing the operational overhead of credential management. AWS Secrets Manager provides a secure and centralized service for storing and managing secrets, such as database credentials. By leveraging Secrets Manager, the application can retrieve the database credentials programmatically at runtime, eliminating the need to store them locally in a file. Enabling automatic rotation ensures that the database credentials are regularly rotated without manual intervention, enhancing security and compliance. upvoted 5 times   TienHuynh 2 months, 1 week ago Selected Answer: A A is a right choice upvoted 1 times   hypnozz 2 months, 2 weeks ago A is the right choice...Because, in Both you can force to rotate or delete de secrets, but, in secrets manager you can use a lambda to generate automatic secrets, also, it´s integrated with RDS, like Aurora...and it´s mostly used for that propouse. You can use parameter store, but you have to update the parameters by yourself upvoted 2 times   Bmarodi 2 months, 3 weeks ago Selected Answer: A Option A meets this goal. upvoted 1 times   beginnercloud 3 months, 1 week ago Selected Answer: A A is correct upvoted 1 times   cheese929 3 months, 1 week ago Selected Answer: A A is the most logial answer. upvoted 1 times   cheese929 3 months, 2 weeks ago A is correct upvoted 1 times Question #12 Topic 1 A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53. What should a solutions architect do to meet these requirements? A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution. B. 
Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint Configure Route 53 to route traffic to the CloudFront distribution. C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application. D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application. Correct Answer: C Community vote distribution A (77%) C (23%)   Kartikey140 Highly Voted  9 months, 1 week ago Answer is A Explanation - AWS Global Accelerator vs CloudFront They both use the AWS global network and its edge locations around the world Both services integrate with AWS Shield for DDoS protection. CloudFront Improves performance for both cacheable content (such as images and videos) Dynamic content (such as API acceleration and dynamic site delivery) Content is served at the edge Global Accelerator Improves performance for a wide range of applications over TCP or UDP Proxying packets at the edge to applications running in one or more AWS Regions. Good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP Good for HTTP use cases that require static IP addresses Good for HTTP use cases that required deterministic, fast regional failover upvoted 63 times   daizy 6 months, 3 weeks ago By creating a CloudFront distribution that has both the S3 bucket and the ALB as origins, the company can reduce latency for both the static and dynamic data. The CloudFront distribution acts as a content delivery network (CDN), caching the data closer to the users and reducing the latency. The company can then configure Route 53 to route traffic to the CloudFront distribution, providing improved performance for the web application. upvoted 4 times   kanweng Highly Voted  9 months, 2 weeks ago Selected Answer: A Q: How is AWS Global Accelerator different from Amazon CloudFront? A: AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection. upvoted 18 times   georgitoan Most Recent  3 weeks, 1 day ago What is the correct answer for the exam? 
upvoted 1 times   BillyBlunts 1 week ago I have been asking this on many questions because it is confusing that majority of people are disagreeing with what this dump and the 3 other dumps have as the answer. Yet the arguments for why they are wrong are very good and some prove right...but f'in aye...what is the answer we need to pick for the test. upvoted 1 times   BillyBlunts 1 week ago I think I am just going with the answers provided by exam topics....that is probably the safer bet. upvoted 1 times   Lorenzo1 3 weeks, 3 days ago Selected Answer: A The answer A fulfills the requirements, so I would choose A. The answer C may also seem to make sense though. upvoted 1 times   gurmit 3 weeks, 4 days ago A. https://stackoverflow.com/questions/71064028/aws-cloudfront-in-front-of-s3-and-alb upvoted 1 times   TariqKipkemei 4 weeks ago Selected Answer: A Improve performance and reduce latency for the static data and dynamic data = Amazon CloudFront upvoted 1 times   Lx016 1 week, 5 days ago And? All four answers are about CloudFront, to find the correct answer need to find correct origin(s) to distribute which are S3 and ALB upvoted 1 times   hakim1977 1 month ago Selected Answer: A Answer is A. Global Accelerator is a good fit for non-HTTP use cases. upvoted 1 times   frkael 1 month, 1 week ago Keywords: Static and Dynamic data/ S3 and ALB. So Cloudfront is for S3 and Global Acelarator is for ALB upvoted 2 times   miki111 1 month, 1 week ago Option A MET THE REQUIREMENT upvoted 1 times   Jayendra0609 1 month, 1 week ago Selected Answer: C Since the data in S3 is static while other data is dynamic. And caching dynamic data doesn't make sense since it will be changing every time. So rather than caching we can use edge locations of Global Accelerator to reduce latency. upvoted 3 times   Clouddon 2 weeks, 4 days ago I will support option C because the combination of CloudFront and Global Accelerator as described in answer C is better. Note: CloudFront uses Edge Locations to cache content (improve performance) while Global Accelerator uses Edge Locations to find an optimal pathway to the nearest regional endpoint (reduce latency for the static data and dynamic data). upvoted 1 times   Clouddon 2 weeks, 4 days ago The above explanation is correct however, the answer cannot be option C (my bad) The correct answer is A. AWS says that Global Accelerator is an acceleration at network level AWS Global Accelerator is a networking service that improves the performance, reliability and security of your online applications using AWS Global Infrastructure. AWS Global Accelerator can be deployed in front of your Network Load Balancers, Application Load Balancers, AWS EC2 instances, and Elastic IPs, any of which could serve as Regional endpoints for your application. (This implies that Cloudfront is not part of the endpoints that can be used by Global accelerator which only provide security of your online applications ) except someone can proof otherwise. upvoted 1 times   premnick 1 month, 2 weeks ago How do we pass the exam if there are lots of questions where 2 options seems to be correct? I dont think Amazon should provide such options where candidates are only left to make a guess. upvoted 3 times   ibu007 1 month, 2 weeks ago Cloudfront: Speeds up distribution of static and dynamic content through its worldwide network of edge locations providing low latency to deliver the best performance through cached data. 
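To picture option A, the single CloudFront distribution carries two origins and routes by path. Below is a rough sketch of the relevant fragment of a distribution configuration in Python dict form; the domain names, origin IDs, /static/* path pattern, and cache-policy placeholders are hypothetical, and a real create_distribution call needs further required fields (caller reference, comment, enabled flag, and so on):

# Placeholders for the AWS managed cache policies (e.g. "CachingOptimized"
# for the static S3 origin and "CachingDisabled" for the dynamic ALB origin).
CACHE_STATIC_POLICY_ID = "<managed CachingOptimized policy id>"
CACHE_DYNAMIC_POLICY_ID = "<managed CachingDisabled policy id>"

distribution_config_fragment = {
    "Origins": {
        "Quantity": 2,
        "Items": [
            {   # Static data comes straight from the S3 bucket.
                "Id": "static-s3-origin",
                "DomainName": "static-assets.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            },
            {   # Dynamic data is fetched from the ALB in front of the EC2 fleet.
                "Id": "dynamic-alb-origin",
                "DomainName": "webapp-alb-1234567890.us-east-1.elb.amazonaws.com",
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            },
        ],
    },
    # Everything not matched below (the dynamic pages) goes to the ALB.
    "DefaultCacheBehavior": {
        "TargetOriginId": "dynamic-alb-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": CACHE_DYNAMIC_POLICY_ID,
    },
    # Requests under /static/* are served (and cached) from the S3 origin.
    "CacheBehaviors": {
        "Quantity": 1,
        "Items": [
            {
                "PathPattern": "/static/*",
                "TargetOriginId": "static-s3-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                "CachePolicyId": CACHE_STATIC_POLICY_ID,
            }
        ],
    },
}

Route 53 then simply points the company's domain at this one distribution with an alias record, which is why option A needs no Global Accelerator and no second domain name.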
upvoted 1 times   jaydesai8 1 month, 2 weeks ago Selected Answer: A makes sense upvoted 1 times   Mia2009687 2 months ago Answer is C As the company wants to improve both static and dynamic delivery. Check the screen shot in the following article. It shows the structure of this scenario. https://tutorialsdojo.com/aws-global-accelerator-vs-amazon-cloudfront/ upvoted 4 times   Mary_Matic 2 weeks, 1 day ago Exactly, the key is "both static and dyanamic data" the right answer is C - otherwise it could have been A for statis content s3 with cloudfront upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: A Option A: Creating an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins is a valid approach. CloudFront is a content delivery network (CDN) that caches and delivers content from edge locations, improving performance and reducing latency. By configuring CloudFront to have the S3 bucket as an origin for static data and the ALB as an origin for dynamic data, the company can benefit from CloudFront's caching and distribution capabilities. Routing traffic to the CloudFront distribution through Route 53 ensures that requests are directed to the nearest edge location, further enhancing performance and reducing latency. upvoted 4 times   Mehkay 2 months, 1 week ago Selected Answer: A Makes sense upvoted 1 times   thaotnt 2 months, 1 week ago Selected Answer: A Using CloudFront upvoted 1 times Question #13 Topic 1 A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions. Which solution will meet these requirements with the LEAST operational overhead? A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule. B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the required Regions. Configure Systems Manager to rotate the secrets on a schedule. C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials. D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets. Correct Answer: A Community vote distribution A (100%)   rein_chau Highly Voted  10 months, 2 weeks ago Selected Answer: A A is correct. https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/ upvoted 18 times   PhucVuu Highly Voted  4 months, 3 weeks ago Selected Answer: A Keywords: - rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions - LEAST operational overhead A: Correct - AWS Secrets Manager supports - Encrypt credential for RDS, DocumentDb, Redshift, other DBs and key/value secret. - multi-region replication. - Remote base on schedule B: Incorrect - Secure string parameter only apply for Parameter Store. All the data in AWS Secrets Manager is encrypted C: Incorrect - don't mention about replicate S3 across region. 
D: Incorrect - So many steps compare to answer A =)) upvoted 6 times   TariqKipkemei Most Recent  4 weeks ago Selected Answer: A 'The company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions' = AWS Secrets Manager upvoted 1 times   miki111 1 month, 1 week ago Option A MET THE REQUIREMENT upvoted 1 times   cookieMr 2 months, 1 week ago Selected Answer: A Option A: Storing the credentials as secrets in AWS Secrets Manager provides a dedicated service for secure and centralized management of secrets. By using multi-Region secret replication, the company ensures that the secrets are available in the required Regions for rotation. Secrets Manager also provides built-in functionality to rotate secrets automatically on a defined schedule, reducing operational overhead. This automation simplifies the process of rotating credentials for the Amazon RDS for MySQL databases during monthly maintenance activities. upvoted 4 times   Bmarodi 2 months, 3 weeks ago Selected Answer: A A is correct answer. upvoted 1 times   Musti35 4 months, 2 weeks ago Selected Answer: A https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/ With Secrets Manager, you can store, retrieve, manage, and rotate your secrets, including database credentials, API keys, and other secrets. When you create a secret using Secrets Manager, it’s created and managed in a Region of your choosing. Although scoping secrets to a Region is a security best practice, there are scenarios such as disaster recovery and cross-Regional redundancy that require replication of secrets across Regions. Secrets Manager now makes it possible for you to easily replicate your secrets to one or more Regions to support these scenarios. upvoted 3 times   linux_admin 4 months, 3 weeks ago Selected Answer: A A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule. This solution is the best option for meeting the requirements with the least operational overhead. AWS Secrets Manager is designed specifically for managing and rotating secrets like database credentials. Using multi-Region secret replication, you can easily replicate the secrets across the required AWS Regions. Additionally, Secrets Manager allows you to configure automatic secret rotation on a schedule, further reducing the operational overhead. upvoted 1 times   cheese929 6 months, 1 week ago Selected Answer: A A is correct. upvoted 1 times   BlueVolcano1 7 months, 1 week ago Selected Answer: A It's A, as Secrets Manager does support replicating secrets into multiple AWS Regions: https://docs.aws.amazon.com/secretsmanager/latest/userguide/create-manage-multi-region-secrets.html upvoted 3 times   Abdel42 7 months, 2 weeks ago Selected Answer: A it's A, here the question specify that we want the LEAST overhead upvoted 2 times   MichaelCarrasco 6 months, 1 week ago https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/ upvoted 1 times   SilentMilli 7 months, 2 weeks ago Selected Answer: A AWS Secrets Manager is a secrets management service that enables you to store, manage, and rotate secrets such as database credentials, API keys, and SSH keys. Secrets Manager can help you minimize the operational overhead of rotating credentials for your Amazon RDS for MySQL databases across multiple Regions. 
With Secrets Manager, you can store the credentials as secrets and use multi-Region secret replication to replicate the secrets to the required Regions. You can then configure Secrets Manager to rotate the secrets on a schedule so that the credentials are rotated automatically without the need for manual intervention. This can help reduce the risk of secrets being compromised and minimize the operational overhead of credential management. upvoted 3 times   Buruguduystunstugudunstuy 8 months ago Selected Answer: A Option A, storing the credentials as secrets in AWS Secrets Manager and using multi-Region secret replication for the required Regions, and configuring Secrets Manager to rotate the secrets on a schedule, would meet these requirements with the least operational overhead.
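Putting option A together as a sketch: the secret is created once with replica Regions and then placed on a rotation schedule. A minimal sketch in Python with boto3; the secret name, Regions, credentials, and rotation Lambda ARN are placeholders, and in practice the rotation function is usually generated from the Secrets Manager rotation template for RDS for MySQL:

import json
import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Create the secret in the primary Region and replicate it to the others.
secret = secrets.create_secret(
    Name="prod/rds-mysql/app-user",
    SecretString=json.dumps({"username": "appuser", "password": "initial-password"}),
    AddReplicaRegions=[
        {"Region": "eu-west-1"},
        {"Region": "ap-southeast-2"},
    ],
)

# Rotate roughly monthly; Secrets Manager invokes the rotation Lambda, which
# updates the RDS for MySQL password and the secret value in one step.
secrets.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-mysql-rotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)

Secrets Manager keeps the replica Regions in sync after each rotation, so the monthly maintenance no longer involves any manual credential handling.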

Use Quizgecko on...
Browser
Browser