AWS Certified Data Engineer-page2.pdf
AWS Certified Data Engineer - Associate DEA-C01 Exam - Free Exam Q&As, Page 2 | ExamTopics
Source: https://www.examtopics.com/exams/amazon/aws-certified-data-engineer-associate-dea-c01/view/2/
Question #51 Topic 1
A data engineer must orchestrate a data pipeline that consists of one AWS Lambda function and one AWS Glue job. The solution must integrate with AWS services.
Which solution will meet these requirements with the LEAST management overhead?
A. Use an AWS Step Functions workflow that includes a state machine. Configure the state machine to run the Lambda function and then the AWS Glue job.
B. Use an Apache Airflow workflow that is deployed on an Amazon EC2 instance. Define a directed acyclic graph (DAG) in which the first task is to call the Lambda function and the second task is to call the AWS Glue job.
C. Use an AWS Glue workflow to run the Lambda function and then the AWS Glue job.
D. Use an Apache Airflow workflow that is deployed on Amazon Elastic Kubernetes Service (Amazon EKS). Define a directed acyclic graph (DAG) in which the first task is to call the Lambda function and the second task is to call the AWS Glue job.

Question #52 Topic 1
A company needs to set up a data catalog and metadata management for data sources that run in the AWS Cloud. The company will use the data catalog to maintain the metadata of all the objects that are in a set of data stores. The data stores include structured sources such as Amazon RDS and Amazon Redshift. The data stores also include semistructured sources such as JSON files and .xml files that are stored in Amazon S3. The company needs a solution that will update the data catalog on a regular basis. The solution also must detect changes to the source metadata.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Aurora as the data catalog. Create AWS Lambda functions that will connect to the data catalog. Configure the Lambda functions to gather the metadata information from multiple sources and to update the Aurora data catalog. Schedule the Lambda functions to run periodically.
B. Use the AWS Glue Data Catalog as the central metadata repository. Use AWS Glue crawlers to connect to multiple data stores and to update the Data Catalog with metadata changes. Schedule the crawlers to run periodically to update the metadata catalog.
C. Use Amazon DynamoDB as the data catalog. Create AWS Lambda functions that will connect to the data catalog. Configure the Lambda functions to gather the metadata information from multiple sources and to update the DynamoDB data catalog. Schedule the Lambda functions to run periodically.
D. Use the AWS Glue Data Catalog as the central metadata repository. Extract the schema for Amazon RDS and Amazon Redshift sources, and build the Data Catalog. Use AWS Glue crawlers for data that is in Amazon S3 to infer the schema and to automatically update the Data Catalog.

Question #53 Topic 1
A company stores data from an application in an Amazon DynamoDB table that operates in provisioned capacity mode. The workloads of the application have predictable throughput load on a regular schedule. Every Monday, there is an immediate increase in activity early in the morning. The application has very low usage during weekends. The company must ensure that the application performs consistently during peak usage times.
Which solution will meet these requirements in the MOST cost-effective way?
A. Increase the provisioned capacity to the maximum capacity that is currently present during peak load times.
B. Divide the table into two tables. Provision each table with half of the provisioned capacity of the original table. Spread queries evenly across both tables.
C. Use AWS Application Auto Scaling to schedule higher provisioned capacity for peak usage times. Schedule lower capacity during off-peak times.
D. Change the capacity mode from provisioned to on-demand. Configure the table to scale up and scale down based on the load on the table.

Question #54 Topic 1
A company is planning to migrate on-premises Apache Hadoop clusters to Amazon EMR. The company also needs to migrate a data catalog into a persistent storage solution. The company currently stores the data catalog in an on-premises Apache Hive metastore on the Hadoop clusters. The company requires a serverless solution to migrate the data catalog.
Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Database Migration Service (AWS DMS) to migrate the Hive metastore into Amazon S3. Configure AWS Glue Data Catalog to scan Amazon S3 to produce the data catalog.
B. Configure a Hive metastore in Amazon EMR. Migrate the existing on-premises Hive metastore into Amazon EMR. Use AWS Glue Data Catalog to store the company's data catalog as an external data catalog.
C. Configure an external Hive metastore in Amazon EMR. Migrate the existing on-premises Hive metastore into Amazon EMR. Use Amazon Aurora MySQL to store the company's data catalog.
D. Configure a new Hive metastore in Amazon EMR. Migrate the existing on-premises Hive metastore into Amazon EMR. Use the new metastore as the company's data catalog.

Question #55 Topic 1
A company uses an Amazon Redshift provisioned cluster as its database. The Redshift cluster has five reserved ra3.4xlarge nodes and uses key distribution. A data engineer notices that one of the nodes frequently has a CPU load over 90%. SQL queries that run on the node are queued. The other four nodes usually have a CPU load under 15% during daily operations. The data engineer wants to maintain the current number of compute nodes. The data engineer also wants to balance the load more evenly across all five compute nodes.
Which solution will meet these requirements?
A. Change the sort key to be the data column that is most often used in a WHERE clause of the SQL SELECT statement.
B. Change the distribution key to the table column that has the largest dimension.
C. Upgrade the reserved node from ra3.4xlarge to ra3.16xlarge.
D. Change the primary key to be the data column that is most often used in a WHERE clause of the SQL SELECT statement.

Question #56 Topic 1
A security company stores IoT data that is in JSON format in an Amazon S3 bucket. The data structure can change when the company upgrades the IoT devices. The company wants to create a data catalog that includes the IoT data. The company's analytics department will use the data catalog to index the data.
Which solution will meet these requirements MOST cost-effectively?
A. Create an AWS Glue Data Catalog. Configure an AWS Glue Schema Registry. Create a new AWS Glue workload to orchestrate the ingestion of the data that the analytics department will use into Amazon Redshift Serverless.
B. Create an Amazon Redshift provisioned cluster. Create an Amazon Redshift Spectrum database for the analytics department to explore the data that is in Amazon S3. Create Redshift stored procedures to load the data into Amazon Redshift.
C. Create an Amazon Athena workgroup. Explore the data that is in Amazon S3 by using Apache Spark through Athena. Provide the Athena workgroup schema and tables to the analytics department.
D. Create an AWS Glue Data Catalog. Configure an AWS Glue Schema Registry. Create AWS Lambda user defined functions (UDFs) by using the Amazon Redshift Data API. Create an AWS Step Functions job to orchestrate the ingestion of the data that the analytics department will use into Amazon Redshift Serverless.

Question #57 Topic 1
A company stores details about transactions in an Amazon S3 bucket. The company wants to log all writes to the S3 bucket into another S3 bucket that is in the same AWS Region.
Which solution will meet this requirement with the LEAST operational effort?
A. Configure an S3 Event Notifications rule for all activities on the transactions S3 bucket to invoke an AWS Lambda function. Program the Lambda function to write the event to Amazon Kinesis Data Firehose. Configure Kinesis Data Firehose to write the event to the logs S3 bucket.
B. Create a trail of management events in AWS CloudTrail. Configure the trail to receive data from the transactions S3 bucket. Specify an empty prefix and write-only events. Specify the logs S3 bucket as the destination bucket.
C. Configure an S3 Event Notifications rule for all activities on the transactions S3 bucket to invoke an AWS Lambda function. Program the Lambda function to write the events to the logs S3 bucket.
D. Create a trail of data events in AWS CloudTrail. Configure the trail to receive data from the transactions S3 bucket. Specify an empty prefix and write-only events. Specify the logs S3 bucket as the destination bucket.

Question #58 Topic 1
A data engineer needs to maintain a central metadata repository that users access through Amazon EMR and Amazon Athena queries. The repository needs to provide the schema and properties of many tables. Some of the metadata is stored in Apache Hive. The data engineer needs to import the metadata from Hive into the central metadata repository.
Which solution will meet these requirements with the LEAST development effort?
A. Use Amazon EMR and Apache Ranger.
B. Use a Hive metastore on an EMR cluster.
C. Use the AWS Glue Data Catalog.
D. Use a metastore on an Amazon RDS for MySQL DB instance.

Question #59 Topic 1
A company needs to build a data lake in AWS. The company must provide row-level data access and column-level data access to specific teams. The teams will access the data by using Amazon Athena, Amazon Redshift Spectrum, and Apache Hive from Amazon EMR.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon S3 for data lake storage. Use S3 access policies to restrict data access by rows and columns. Provide data access through Amazon S3.
B. Use Amazon S3 for data lake storage. Use Apache Ranger through Amazon EMR to restrict data access by rows and columns. Provide data access by using Apache Pig.
C. Use Amazon Redshift for data lake storage. Use Redshift security policies to restrict data access by rows and columns. Provide data access by using Apache Spark and Amazon Athena federated queries.
D. Use Amazon S3 for data lake storage. Use AWS Lake Formation to restrict data access by rows and columns. Provide data access through AWS Lake Formation.

Question #60 Topic 1
An airline company is collecting metrics about flight activities for analytics. The company is conducting a proof of concept (POC) test to show how analytics can provide insights that the company can use to increase on-time departures. The POC test uses objects in Amazon S3 that contain the metrics in .csv format. The POC test uses Amazon Athena to query the data. The data is partitioned in the S3 bucket by date. As the amount of data increases, the company wants to optimize the storage solution to improve query performance.
Which combination of solutions will meet these requirements? (Choose two.)
A. Add a randomized string to the beginning of the keys in Amazon S3 to get more throughput across partitions.
B. Use an S3 bucket that is in the same account that uses Athena to query the data.
C. Use an S3 bucket that is in the same AWS Region where the company runs Athena queries.
D. Preprocess the .csv data to JSON format by fetching only the document keys that the query requires.
E. Preprocess the .csv data to Apache Parquet format by fetching only the data blocks that are needed for predicates.

Question #61 Topic 1
A company uses Amazon RDS for MySQL as the database for a critical application. The database workload is mostly writes, with a small number of reads. A data engineer notices that the CPU utilization of the DB instance is very high. The high CPU utilization is slowing down the application. The data engineer must reduce the CPU utilization of the DB instance.
Which actions should the data engineer take to meet this requirement? (Choose two.)
A. Use the Performance Insights feature of Amazon RDS to identify queries that have high CPU utilization. Optimize the problematic queries.
B. Modify the database schema to include additional tables and indexes.
C. Reboot the RDS DB instance once each week.
D. Upgrade to a larger instance size.
E. Implement caching to reduce the database query load.

Question #62 Topic 1
A company has used an Amazon Redshift table that is named Orders for 6 months. The company performs weekly updates and deletes on the table. The table has an interleaved sort key on a column that contains AWS Regions. The company wants to reclaim disk space so that the company will not run out of storage space. The company also wants to analyze the sort key column.
Which Amazon Redshift command will meet these requirements?
A. VACUUM FULL Orders
B. VACUUM DELETE ONLY Orders
C. VACUUM REINDEX Orders
D. VACUUM SORT ONLY Orders

Question #63 Topic 1
A manufacturing company wants to collect data from sensors. A data engineer needs to implement a solution that ingests sensor data in near real time. The solution must store the data to a persistent data store. The solution must store the data in nested JSON format. The company must have the ability to query from the data store with a latency of less than 10 milliseconds.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use a self-hosted Apache Kafka cluster to capture the sensor data. Store the data in Amazon S3 for querying.
B. Use AWS Lambda to process the sensor data. Store the data in Amazon S3 for querying.
C. Use Amazon Kinesis Data Streams to capture the sensor data. Store the data in Amazon DynamoDB for querying.
D. Use Amazon Simple Queue Service (Amazon SQS) to buffer incoming sensor data. Use AWS Glue to store the data in Amazon RDS for querying.

Question #64 Topic 1
A company stores data in a data lake that is in Amazon S3. Some data that the company stores in the data lake contains personally identifiable information (PII). Multiple user groups need to access the raw data. The company must ensure that user groups can access only the PII that they require.
Which solution will meet these requirements with the LEAST effort?
A. Use Amazon Athena to query the data. Set up AWS Lake Formation and create data filters to establish levels of access for the company's IAM roles. Assign each user to the IAM role that matches the user's PII access requirements.
B. Use Amazon QuickSight to access the data. Use column-level security features in QuickSight to limit the PII that users can retrieve from Amazon S3 by using Amazon Athena. Define QuickSight access levels based on the PII access requirements of the users.
C. Build a custom query builder UI that will run Athena queries in the background to access the data. Create user groups in Amazon Cognito. Assign access levels to the user groups based on the PII access requirements of the users.
D. Create IAM roles that have different levels of granular access. Assign the IAM roles to IAM user groups. Use an identity-based policy to assign access levels to user groups at the column level.

Question #65 Topic 1
A data engineer must build an extract, transform, and load (ETL) pipeline to process and load data from 10 source systems into 10 tables that are in an Amazon Redshift database. All the source systems generate .csv, JSON, or Apache Parquet files every 15 minutes. The source systems all deliver files into one Amazon S3 bucket. The file sizes range from 10 MB to 20 GB. The ETL pipeline must function correctly despite changes to the data schema.
Which data pipeline solutions will meet these requirements? (Choose two.)
A. Use an Amazon EventBridge rule to run an AWS Glue job every 15 minutes. Configure the AWS Glue job to process and load the data into the Amazon Redshift tables.
B. Use an Amazon EventBridge rule to invoke an AWS Glue workflow job every 15 minutes. Configure the AWS Glue workflow to have an on-demand trigger that runs an AWS Glue crawler and then runs an AWS Glue job when the crawler finishes running successfully. Configure the AWS Glue job to process and load the data into the Amazon Redshift tables.
C. Configure an AWS Lambda function to invoke an AWS Glue crawler when a file is loaded into the S3 bucket. Configure an AWS Glue job to process and load the data into the Amazon Redshift tables. Create a second Lambda function to run the AWS Glue job. Create an Amazon EventBridge rule to invoke the second Lambda function when the AWS Glue crawler finishes running successfully.
D. Configure an AWS Lambda function to invoke an AWS Glue workflow when a file is loaded into the S3 bucket. Configure the AWS Glue workflow to have an on-demand trigger that runs an AWS Glue crawler and then runs an AWS Glue job when the crawler finishes running successfully. Configure the AWS Glue job to process and load the data into the Amazon Redshift tables.
E. Configure an AWS Lambda function to invoke an AWS Glue job when a file is loaded into the S3 bucket. Configure the AWS Glue job to read the files from the S3 bucket into an Apache Spark DataFrame. Configure the AWS Glue job to also put smaller partitions of the DataFrame into an Amazon Kinesis Data Firehose delivery stream. Configure the delivery stream to load data into the Amazon Redshift tables.

Question #66 Topic 1
A financial company wants to use Amazon Athena to run on-demand SQL queries on a petabyte-scale dataset to support a business intelligence (BI) application. An AWS Glue job that runs during non-business hours updates the dataset once every day. The BI application has a standard data refresh frequency of 1 hour to comply with company policies. A data engineer wants to cost optimize the company's use of Amazon Athena without adding any additional infrastructure costs.
Which solution will meet these requirements with the LEAST operational overhead?
A. Configure an Amazon S3 Lifecycle policy to move data to the S3 Glacier Deep Archive storage class after 1 day.
B. Use the query result reuse feature of Amazon Athena for the SQL queries.
C. Add an Amazon ElastiCache cluster between the BI application and Athena.
D. Change the format of the files that are in the dataset to Apache Parquet.

Question #67 Topic 1
A company's data engineer needs to optimize the performance of table SQL queries. The company stores data in an Amazon Redshift cluster. The data engineer cannot increase the size of the cluster because of budget constraints. The company stores the data in multiple tables and loads the data by using the EVEN distribution style. Some tables are hundreds of gigabytes in size. Other tables are less than 10 MB in size.
Which solution will meet these requirements?
A. Keep using the EVEN distribution style for all tables. Specify primary and foreign keys for all tables.
B. Use the ALL distribution style for large tables. Specify primary and foreign keys for all tables.
C. Use the ALL distribution style for rarely updated small tables. Specify primary and foreign keys for all tables.
D. Specify a combination of distribution, sort, and partition keys for all tables.

Question #68 Topic 1
A company receives .csv files that contain physical address data. The data is in columns that have the following names: Door_No, Street_Name, City, and Zip_Code. The company wants to create a single column to store these values in the following format:
Which solution will meet this requirement with the LEAST coding effort?
A. Use AWS Glue DataBrew to read the files. Use the NEST_TO_ARRAY transformation to create the new column.
B. Use AWS Glue DataBrew to read the files. Use the NEST_TO_MAP transformation to create the new column.
C. Use AWS Glue DataBrew to read the files. Use the PIVOT transformation to create the new column.
D. Write a Lambda function in Python to read the files. Use the Python data dictionary type to create the new column.

Question #69 Topic 1
A company receives call logs as Amazon S3 objects that contain sensitive customer information. The company must protect the S3 objects by using encryption. The company must also use encryption keys that only specific employees can access.
Which solution will meet these requirements with the LEAST effort?
A. Use an AWS CloudHSM cluster to store the encryption keys. Configure the process that writes to Amazon S3 to make calls to CloudHSM to encrypt and decrypt the objects. Deploy an IAM policy that restricts access to the CloudHSM cluster.
B. Use server-side encryption with customer-provided keys (SSE-C) to encrypt the objects that contain customer information. Restrict access to the keys that encrypt the objects.
C. Use server-side encryption with AWS KMS keys (SSE-KMS) to encrypt the objects that contain customer information. Configure an IAM policy that restricts access to the KMS keys that encrypt the objects.
D. Use server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the objects that contain customer information. Configure an IAM policy that restricts access to the Amazon S3 managed keys that encrypt the objects.

Question #70 Topic 1
A company stores petabytes of data in thousands of Amazon S3 buckets in the S3 Standard storage class. The data supports analytics workloads that have unpredictable and variable data access patterns. The company does not access some data for months. However, the company must be able to retrieve all data within milliseconds. The company needs to optimize S3 storage costs.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use S3 Storage Lens standard metrics to determine when to move objects to more cost-optimized storage classes. Create S3 Lifecycle policies for the S3 buckets to move objects to cost-optimized storage classes. Continue to refine the S3 Lifecycle policies in the future to optimize storage costs.
B. Use S3 Storage Lens activity metrics to identify S3 buckets that the company accesses infrequently. Configure S3 Lifecycle rules to move objects from S3 Standard to the S3 Standard-Infrequent Access (S3 Standard-IA) and S3 Glacier storage classes based on the age of the data.
C. Use S3 Intelligent-Tiering. Activate the Deep Archive Access tier.
D. Use S3 Intelligent-Tiering. Use the default access tier.

Question #71 Topic 1
During a security review, a company identified a vulnerability in an AWS Glue job. The company discovered that credentials to access an Amazon Redshift cluster were hard coded in the job script. A data engineer must remediate the security vulnerability in the AWS Glue job. The solution must securely store the credentials.
Which combination of steps should the data engineer take to meet these requirements? (Choose two.)
A. Store the credentials in the AWS Glue job parameters.
B. Store the credentials in a configuration file that is in an Amazon S3 bucket.
C. Access the credentials from a configuration file that is in an Amazon S3 bucket by using the AWS Glue job.
D. Store the credentials in AWS Secrets Manager.
E. Grant the AWS Glue job IAM role access to the stored credentials.
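Note: as an illustration of the pattern that Question #71 describes (store the Redshift credentials in AWS Secrets Manager and let the AWS Glue job's IAM role read them), the following Python sketch shows how a Glue job script might fetch the secret at run time instead of hard coding it. The secret name, JSON field names, and Region below are hypothetical placeholders, not values from the question.

import json
import boto3

def get_redshift_credentials(secret_id: str, region: str = "us-east-1") -> dict:
    # Fetch and parse a JSON secret that holds the Redshift connection details.
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

# Hypothetical secret name; the job's IAM role needs secretsmanager:GetSecretValue
# on this secret (and kms:Decrypt if the secret uses a customer managed KMS key).
creds = get_redshift_credentials("prod/redshift/etl-user")
jdbc_url = f"jdbc:redshift://{creds['host']}:{creds['port']}/{creds['dbname']}"
# Pass jdbc_url, creds["username"], and creds["password"] to the job's Redshift
# connection options instead of embedding literal credentials in the script.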
Question #72 Topic 1
A data engineer uses Amazon Redshift to run resource-intensive analytics processes once every month. Every month, the data engineer creates a new Redshift provisioned cluster. The data engineer deletes the Redshift provisioned cluster after the analytics processes are complete every month. Before the data engineer deletes the cluster each month, the data engineer unloads backup data from the cluster to an Amazon S3 bucket. The data engineer needs a solution to run the monthly analytics processes that does not require the data engineer to manage the infrastructure manually.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Step Functions to pause the Redshift cluster when the analytics processes are complete and to resume the cluster to run new processes every month.
B. Use Amazon Redshift Serverless to automatically process the analytics workload.
C. Use the AWS CLI to automatically process the analytics workload.
D. Use AWS CloudFormation templates to automatically process the analytics workload.

Question #73 Topic 1
A company receives a daily file that contains customer data in .xls format. The company stores the file in Amazon S3. The daily file is approximately 2 GB in size. A data engineer concatenates the column in the file that contains customer first names and the column that contains customer last names. The data engineer needs to determine the number of distinct customers in the file.
Which solution will meet this requirement with the LEAST operational effort?
A. Create and run an Apache Spark job in an AWS Glue notebook. Configure the job to read the S3 file and calculate the number of distinct customers.
B. Create an AWS Glue crawler to create an AWS Glue Data Catalog of the S3 file. Run SQL queries from Amazon Athena to calculate the number of distinct customers.
C. Create and run an Apache Spark job in Amazon EMR Serverless to calculate the number of distinct customers.
D. Use AWS Glue DataBrew to create a recipe that uses the COUNT_DISTINCT aggregate function to calculate the number of distinct customers.

Question #74 Topic 1
A healthcare company uses Amazon Kinesis Data Streams to stream real-time health data from wearable devices, hospital equipment, and patient records. A data engineer needs to find a solution to process the streaming data. The data engineer needs to store the data in an Amazon Redshift Serverless warehouse. The solution must support near real-time analytics of the streaming data and the previous day's data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Load data into Amazon Kinesis Data Firehose. Load the data into Amazon Redshift.
B. Use the streaming ingestion feature of Amazon Redshift.
C. Load the data into Amazon S3. Use the COPY command to load the data into Amazon Redshift.
D. Use the Amazon Aurora zero-ETL integration with Amazon Redshift.

Question #75 Topic 1
A data engineer needs to use an Amazon QuickSight dashboard that is based on Amazon Athena queries on data that is stored in an Amazon S3 bucket. When the data engineer connects to the QuickSight dashboard, the data engineer receives an error message that indicates insufficient permissions.
Which factors could cause the permissions-related errors? (Choose two.)
A. There is no connection between QuickSight and Athena.
B. The Athena tables are not cataloged.
C. QuickSight does not have access to the S3 bucket.
D. QuickSight does not have access to decrypt S3 data.
E. There is no IAM role assigned to QuickSight.

Question #76 Topic 1
A company stores datasets in JSON format and .csv format in an Amazon S3 bucket. The company has Amazon RDS for Microsoft SQL Server databases, Amazon DynamoDB tables that are in provisioned capacity mode, and an Amazon Redshift cluster. A data engineering team must develop a solution that will give data scientists the ability to query all data sources by using syntax similar to SQL.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Glue to crawl the data sources. Store metadata in the AWS Glue Data Catalog. Use Amazon Athena to query the data. Use SQL for structured data sources. Use PartiQL for data that is stored in JSON format.
B. Use AWS Glue to crawl the data sources. Store metadata in the AWS Glue Data Catalog. Use Redshift Spectrum to query the data. Use SQL for structured data sources. Use PartiQL for data that is stored in JSON format.
C. Use AWS Glue to crawl the data sources. Store metadata in the AWS Glue Data Catalog. Use AWS Glue jobs to transform data that is in JSON format to Apache Parquet or .csv format. Store the transformed data in an S3 bucket. Use Amazon Athena to query the original and transformed data from the S3 bucket.
D. Use AWS Lake Formation to create a data lake. Use Lake Formation jobs to transform the data from all data sources to Apache Parquet format. Store the transformed data in an S3 bucket. Use Amazon Athena or Redshift Spectrum to query the data.

Question #77 Topic 1
A data engineer is configuring Amazon SageMaker Studio to use AWS Glue interactive sessions to prepare data for machine learning (ML) models. The data engineer receives an access denied error when the data engineer tries to prepare the data by using SageMaker Studio.
Which change should the engineer make to gain access to SageMaker Studio?
A. Add the AWSGlueServiceRole managed policy to the data engineer's IAM user.
B. Add a policy to the data engineer's IAM user that includes the sts:AssumeRole action for the AWS Glue and SageMaker service principals in the trust policy.
C. Add the AmazonSageMakerFullAccess managed policy to the data engineer's IAM user.
D. Add a policy to the data engineer's IAM user that allows the sts:AddAssociation action for the AWS Glue and SageMaker service principals in the trust policy.

Question #78 Topic 1
A company extracts approximately 1 TB of data every day from data sources such as SAP HANA, Microsoft SQL Server, MongoDB, Apache Kafka, and Amazon DynamoDB. Some of the data sources have undefined data schemas or data schemas that change. A data engineer must implement a solution that can detect the schema for these data sources. The solution must extract, transform, and load the data to an Amazon S3 bucket. The company has a service level agreement (SLA) to load the data into the S3 bucket within 15 minutes of data creation.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon EMR to detect the schema and to extract, transform, and load the data into the S3 bucket. Create a pipeline in Apache Spark.
B. Use AWS Glue to detect the schema and to extract, transform, and load the data into the S3 bucket. Create a pipeline in Apache Spark.
C. Create a PySpark program in AWS Lambda to extract, transform, and load the data into the S3 bucket.
D. Create a stored procedure in Amazon Redshift to detect the schema and to extract, transform, and load the data into a Redshift Spectrum table. Access the table from Amazon S3.

Question #79 Topic 1
A company has multiple applications that use datasets that are stored in an Amazon S3 bucket. The company has an ecommerce application that generates a dataset that contains personally identifiable information (PII). The company has an internal analytics application that does not require access to the PII. To comply with regulations, the company must not share PII unnecessarily. A data engineer needs to implement a solution that will redact PII dynamically, based on the needs of each application that accesses the dataset.
Which solution will meet the requirements with the LEAST operational overhead?
A. Create an S3 bucket policy to limit the access each application has. Create multiple copies of the dataset. Give each dataset copy the appropriate level of redaction for the needs of the application that accesses the copy.
B. Create an S3 Object Lambda endpoint. Use the S3 Object Lambda endpoint to read data from the S3 bucket. Implement redaction logic within an S3 Object Lambda function to dynamically redact PII based on the needs of each application that accesses the data.
C. Use AWS Glue to transform the data for each application. Create multiple copies of the dataset. Give each dataset copy the appropriate level of redaction for the needs of the application that accesses the copy.
D. Create an API Gateway endpoint that has custom authorizers. Use the API Gateway endpoint to read data from the S3 bucket. Initiate a REST API call to dynamically redact PII based on the needs of each application that accesses the data.

Question #80 Topic 1
A data engineer needs to build an extract, transform, and load (ETL) job. The ETL job will process daily incoming .csv files that users upload to an Amazon S3 bucket. The size of each S3 object is less than 100 MB.
Which solution will meet these requirements MOST cost-effectively?
A. Write a custom Python application. Host the application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
B. Write a PySpark ETL script. Host the script on an Amazon EMR cluster.
C. Write an AWS Glue PySpark job. Use Apache Spark to transform the data.
D. Write an AWS Glue Python shell job. Use pandas to transform the data.

Question #81 Topic 1
A data engineer creates an AWS Glue Data Catalog table by using an AWS Glue crawler that is named Orders. The data engineer wants to add the following new partitions:
s3://transactions/orders/order_date=2023-01-01
s3://transactions/orders/order_date=2023-01-02
The data engineer must edit the metadata to include the new partitions in the table without scanning all the folders and files in the location of the table.
Which data definition language (DDL) statement should the data engineer use in Amazon Athena?
A. ALTER TABLE Orders ADD PARTITION(order_date='2023-01-01') LOCATION 's3://transactions/orders/order_date=2023-01-01'; ALTER TABLE Orders ADD PARTITION(order_date='2023-01-02') LOCATION 's3://transactions/orders/order_date=2023-01-02';
B. MSCK REPAIR TABLE Orders;
C. REPAIR TABLE Orders;
D. ALTER TABLE Orders MODIFY PARTITION(order_date='2023-01-01') LOCATION 's3://transactions/orders/2023-01-01'; ALTER TABLE Orders MODIFY PARTITION(order_date='2023-01-02') LOCATION 's3://transactions/orders/2023-01-02';

Question #82 Topic 1
A company stores 10 to 15 TB of uncompressed .csv files in Amazon S3. The company is evaluating Amazon Athena as a one-time query engine. The company wants to transform the data to optimize query runtime and storage costs.
Which file format and compression solution will meet these requirements for Athena queries?
A. .csv format compressed with zip
B. JSON format compressed with bzip2
C. Apache Parquet format compressed with Snappy
D. Apache Avro format compressed with LZO

Question #83 Topic 1
A company uses Apache Airflow to orchestrate the company's current on-premises data pipelines. The company runs SQL data quality check tasks as part of the pipelines. The company wants to migrate the pipelines to AWS and to use AWS managed services.
Which solution will meet these requirements with the LEAST amount of refactoring?
A. Set up AWS Outposts in the AWS Region that is nearest to the location where the company uses Airflow. Migrate the servers into Outposts hosted Amazon EC2 instances. Update the pipelines to interact with the Outposts hosted EC2 instances instead of the on-premises pipelines.
B. Create a custom Amazon Machine Image (AMI) that contains the Airflow application and the code that the company needs to migrate. Use the custom AMI to deploy Amazon EC2 instances. Update the network connections to interact with the newly deployed EC2 instances.
C. Migrate the existing Airflow orchestration configuration into Amazon Managed Workflows for Apache Airflow (Amazon MWAA). Create the data quality checks during the ingestion to validate the data quality by using SQL tasks in Airflow.
D. Convert the pipelines to AWS Step Functions workflows. Recreate the data quality checks in SQL as Python based AWS Lambda functions.

Question #84 Topic 1
A company uses Amazon EMR as an extract, transform, and load (ETL) pipeline to transform data that comes from multiple sources. A data engineer must orchestrate the pipeline to maximize performance.
Which AWS service will meet this requirement MOST cost-effectively?
A. Amazon EventBridge
B. Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
C. AWS Step Functions
D. AWS Glue Workflows

Question #85 Topic 1
An online retail company stores Application Load Balancer (ALB) access logs in an Amazon S3 bucket. The company wants to use Amazon Athena to query the logs to analyze traffic patterns. A data engineer creates an unpartitioned table in Athena. As the amount of data gradually increases, the response time for queries also increases. The data engineer wants to improve the query performance in Athena.
Which solution will meet these requirements with the LEAST operational effort?
A. Create an AWS Glue job that determines the schema of all ALB access logs and writes the partition metadata to AWS Glue Data Catalog.
B. Create an AWS Glue crawler that includes a classifier that determines the schema of all ALB access logs and writes the partition metadata to AWS Glue Data Catalog.
C. Create an AWS Lambda function to transform all ALB access logs. Save the results to Amazon S3 in Apache Parquet format. Partition the metadata. Use Athena to query the transformed data.
D. Use Apache Hive to create bucketed tables. Use an AWS Lambda function to transform all ALB access logs.

Question #86 Topic 1
A company has a business intelligence platform on AWS. The company uses an AWS Storage Gateway Amazon S3 File Gateway to transfer files from the company's on-premises environment to an Amazon S3 bucket. A data engineer needs to set up a process that will automatically launch an AWS Glue workflow to run a series of AWS Glue jobs when each file transfer finishes successfully.
Which solution will meet these requirements with the LEAST operational overhead?
A. Determine when the file transfers usually finish based on previous successful file transfers. Set up an Amazon EventBridge scheduled event to initiate the AWS Glue jobs at that time of day.
B. Set up an Amazon EventBridge event that initiates the AWS Glue workflow after every successful S3 File Gateway file transfer event.
C. Set up an on-demand AWS Glue workflow so that the data engineer can start the AWS Glue workflow when each file transfer is complete.
D. Set up an AWS Lambda function that will invoke the AWS Glue Workflow. Set up an event for the creation of an S3 object as a trigger for the Lambda function.

Question #87 Topic 1
A retail company uses Amazon Aurora PostgreSQL to process and store live transactional data. The company uses an Amazon Redshift cluster for a data warehouse. An extract, transform, and load (ETL) job runs every morning to update the Redshift cluster with new data from the PostgreSQL database. The company has grown rapidly and needs to cost optimize the Redshift cluster. A data engineer needs to create a solution to archive historical data. The data engineer must be able to run analytics queries that effectively combine data from live transactional data in PostgreSQL, current data in Redshift, and archived historical data. The solution must keep only the most recent 15 months of data in Amazon Redshift to reduce costs.
Which combination of steps will meet these requirements? (Choose two.)
A. Configure the Amazon Redshift Federated Query feature to query live transactional data that is in the PostgreSQL database.
B. Configure Amazon Redshift Spectrum to query live transactional data that is in the PostgreSQL database.
C. Schedule a monthly job to copy data that is older than 15 months to Amazon S3 by using the UNLOAD command. Delete the old data from the Redshift cluster. Configure Amazon Redshift Spectrum to access historical data in Amazon S3.
D. Schedule a monthly job to copy data that is older than 15 months to Amazon S3 Glacier Flexible Retrieval by using the UNLOAD command. Delete the old data from the Redshift cluster. Configure Redshift Spectrum to access historical data from S3 Glacier Flexible Retrieval.
E. Create a materialized view in Amazon Redshift that combines live, current, and historical data from different sources.

Question #88 Topic 1
A manufacturing company has many IoT devices in facilities around the world. The company uses Amazon Kinesis Data Streams to collect data from the devices. The data includes device ID, capture date, measurement type, measurement value, and facility ID. The company uses facility ID as the partition key. The company's operations team recently observed many WriteThroughputExceeded exceptions. The operations team found that some shards were heavily used but other shards were generally idle.
How should the company resolve the issues that the operations team observed?
A. Change the partition key from facility ID to a randomly generated key.
B. Increase the number of shards.
C. Archive the data on the producer's side.
D. Change the partition key from facility ID to capture date.

Question #89 Topic 1
A data engineer wants to improve the performance of SQL queries in Amazon Athena that run against a sales data table. The data engineer wants to understand the execution plan of a specific SQL statement. The data engineer also wants to see the computational cost of each operation in a SQL query.
Which statement does the data engineer need to run to meet these requirements?
A. EXPLAIN SELECT * FROM sales;
B. EXPLAIN ANALYZE FROM sales;
C. EXPLAIN ANALYZE SELECT * FROM sales;
D. EXPLAIN FROM sales;

Question #90 Topic 1
A company plans to provision a log delivery stream within a VPC. The company configured the VPC flow logs to publish to Amazon CloudWatch Logs. The company needs to send the flow logs to Splunk in near real time for further analysis.
Which solution will meet these requirements with the LEAST operational overhead?
A. Configure an Amazon Kinesis Data Streams data stream to use Splunk as the destination. Create a CloudWatch Logs subscription filter to send log events to the data stream.
B. Create an Amazon Kinesis Data Firehose delivery stream to use Splunk as the destination. Create a CloudWatch Logs subscription filter to send log events to the delivery stream.
C. Create an Amazon Kinesis Data Firehose delivery stream to use Splunk as the destination. Create an AWS Lambda function to send the flow logs from CloudWatch Logs to the delivery stream.
D. Configure an Amazon Kinesis Data Streams data stream to use Splunk as the destination. Create an AWS Lambda function to send the flow logs from CloudWatch Logs to the data stream.

Question #91 Topic 1
A company has a data lake on AWS. The data lake ingests sources of data from business units. The company uses Amazon Athena for queries. The storage layer is Amazon S3 with an AWS Glue Data Catalog as a metadata repository. The company wants to make the data available to data scientists and business analysts. However, the company first needs to manage fine-grained, column-level data access for Athena based on the user roles and responsibilities.
Which solution will meet these requirements?
A. Set up AWS Lake Formation. Define security policy-based rules for the users and applications by IAM role in Lake Formation.
B. Define an IAM resource-based policy for AWS Glue tables. Attach the same policy to IAM user groups.
C. Define an IAM identity-based policy for AWS Glue tables. Attach the same policy to IAM roles. Associate the IAM roles with IAM groups that contain the users.
D. Create a resource share in AWS Resource Access Manager (AWS RAM) to grant access to IAM users.

Question #92 Topic 1
A company has developed several AWS Glue extract, transform, and load (ETL) jobs to validate and transform data from Amazon S3. The ETL jobs load the data into Amazon RDS for MySQL in batches once every day. The ETL jobs use a DynamicFrame to read the S3 data. The ETL jobs currently process all the data that is in the S3 bucket. However, the company wants the jobs to process only the daily incremental data.
Which solution will meet this requirement with the LEAST coding effort?
A. Create an ETL job that reads the S3 file status and logs the status in Amazon DynamoDB.
B. Enable job bookmarks for the ETL jobs to update the state after a run to keep track of previously processed data.
C. Enable job metrics for the ETL jobs to help keep track of processed objects in Amazon CloudWatch.
D. Configure the ETL jobs to delete processed objects from Amazon S3 after each run.

Question #93 Topic 1
An online retail company has an application that runs on Amazon EC2 instances that are in a VPC. The company wants to collect flow logs for the VPC and analyze network traffic.
Which solution will meet these requirements MOST cost-effectively?
A. Publish flow logs to Amazon CloudWatch Logs. Use Amazon Athena for analytics.
B. Publish flow logs to Amazon CloudWatch Logs. Use an Amazon OpenSearch Service cluster for analytics.
C. Publish flow logs to Amazon S3 in text format. Use Amazon Athena for analytics.
D. Publish flow logs to Amazon S3 in Apache Parquet format. Use Amazon Athena for analytics.

Question #94 Topic 1
A retail company stores transactions, store locations, and customer information tables in four reserved ra3.4xlarge Amazon Redshift cluster nodes. All three tables use even table distribution. The company updates the store location table only once or twice every few years. A data engineer notices that Redshift queues are slowing down because the whole store location table is constantly being broadcast to all four compute nodes for most queries. The data engineer wants to speed up the query performance by minimizing the broadcasting of the store location table.
Which solution will meet these requirements in the MOST cost-effective way?
A. Change the distribution style of the store location table from EVEN distribution to ALL distribution.
B. Change the distribution style of the store location table to KEY distribution based on the column that has the highest dimension.
C. Add a join column named store_id into the sort key for all the tables.
D. Upgrade the Redshift reserved node to a larger instance size in the same instance family.

Question #95 Topic 1
A company has a data warehouse that contains a table that is named Sales. The company stores the table in Amazon Redshift. The table includes a column that is named city_name. The company wants to query the table to find all rows that have a city_name that starts with "San" or "El".
Which SQL query will meet this requirement?
A. Select * from Sales where city_name ~ '$(San|El)*';
B. Select * from Sales where city_name ~ '^(San|El)*';
C. Select * from Sales where city_name ~ '$(San&El)*';
D. Select * from Sales where city_name ~ '^(San&El)*';

Question #96 Topic 1
A company needs to send customer call data from its on-premises PostgreSQL database to AWS to generate near real-time insights. The solution must capture and load updates from operational data stores that run in the PostgreSQL database. The data changes continuously. A data engineer configures an AWS Database Migration Service (AWS DMS) ongoing replication task. The task reads changes in near real time from the PostgreSQL source database transaction logs for each table. The task then sends the data to an Amazon Redshift cluster for processing. The data engineer discovers latency issues during the change data capture (CDC) of the task. The data engineer thinks that the PostgreSQL source database is causing the high latency.
Which solution will confirm that the PostgreSQL database is the source of the high latency?
A. Use Amazon CloudWatch to monitor the DMS task. Examine the CDCIncomingChanges metric to identify delays in the CDC from the source database.
B. Verify that logical replication of the source database is configured in the postgresql.conf configuration file.
C. Enable Amazon CloudWatch Logs for the DMS endpoint of the source database. Check for error messages.
D. Use Amazon CloudWatch to monitor the DMS task. Examine the CDCLatencySource metric to identify delays in the CDC from the source database.

Question #97 Topic 1
A lab uses IoT sensors to monitor humidity, temperature, and pressure for a project. The sensors send 100 KB of data every 10 seconds. A downstream process will read the data from an Amazon S3 bucket every 30 seconds.
Which solution will deliver the data to the S3 bucket with the LEAST latency?
A. Use Amazon Kinesis Data Streams and Amazon Kinesis Data Firehose to deliver the data to the S3 bucket. Use the default buffer interval for Kinesis Data Firehose.
B. Use Amazon Kinesis Data Streams to deliver the data to the S3 bucket. Configure the stream to use 5 provisioned shards.
C. Use Amazon Kinesis Data Streams and call the Kinesis Client Library to deliver the data to the S3 bucket. Use a 5 second buffer interval from an application.
D. Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) and Amazon Kinesis Data Firehose to deliver the data to the S3 bucket. Use a 5 second buffer interval for Kinesis Data Firehose.

Question #98 Topic 1
A company wants to use machine learning (ML) to perform analytics on data that is in an Amazon S3 data lake. The company has two data transformation requirements that will give consumers within the company the ability to create reports. The company must perform daily transformations on 300 GB of data that is in a variety of formats and that must arrive in Amazon S3 at a scheduled time. The company must perform one-time transformations of terabytes of archived data that is in the S3 data lake. The company uses Amazon Managed Workflows for Apache Airflow (Amazon MWAA) Directed Acyclic Graphs (DAGs) to orchestrate processing.
Which combination of tasks should the company schedule in the Amazon MWAA DAGs to meet these requirements MOST cost-effectively? (Choose two.)
A. For daily incoming data, use AWS Glue crawlers to scan and identify the schema.
B. For daily incoming data, use Amazon Athena to scan and identify the schema.
C. For daily incoming data, use Amazon Redshift to perform transformations.
D. For daily and archived data, use Amazon EMR to perform data transformations.
E. For archived data, use Amazon SageMaker to perform data transformations.

Question #99 Topic 1
A retail company uses AWS Glue for extract, transform, and load (ETL) operations on a dataset that contains information about customer orders. The company wants to implement specific validation rules to ensure data accuracy and consistency.
Which solution will meet these requirements?
A. Use AWS Glue job bookmarks to track the data for accuracy and consistency.
B. Create custom AWS Glue Data Quality rulesets to define specific data quality checks.
C. Use the built-in AWS Glue Data Quality transforms for standard data quality validations.
D. Use AWS Glue Data Catalog to maintain a centralized data schema and metadata repository.

Question #100 Topic 1
An insurance company stores transaction data that the company compressed with gzip. The company needs to query the transaction data for occasional audits.
Which solution will meet this requirement in the MOST cost-effective way?
A. Store the data in Amazon Glacier Flexible Retrieval. Use Amazon S3 Glacier Select to query the data.
B. Store the data in Amazon S3. Use Amazon S3 Select to query the data.
C. Store the data in Amazon S3. Use Amazon Athena to query the data.
D. Store the data in Amazon Glacier Instant Retrieval. Use Amazon Athena to query the data.
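Note: as a closing illustration of the Athena-on-S3 approach that Question #100 points toward, the following Python sketch runs an occasional audit query against gzip-compressed objects in S3 through Athena, which reads gzip-compressed text formats transparently once a table is defined over the S3 location. The database, table, output location, and query below are hypothetical placeholders, not values from the question.

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start the audit query; Athena writes the results to the S3 output location.
start = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM transactions WHERE transaction_date >= DATE '2024-01-01'",
    QueryExecutionContext={"Database": "audit_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/audits/"},
)
query_id = start["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([field.get("VarCharValue") for field in row["Data"]])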