
Expert Verified, Online, Free. Custom View Settings

Topic 1 - Question Set 1

Question #1 Topic 1

You have 20 Azure SQL databases provisioned by using the vCore purchasing model. You plan to create an Azure SQL Database elastic pool and add the 20 databases.
Which three metrics should you use to size the elastic pool to meet the demands of your workload? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. total size of all the databases
B. geo-replication support
C. number of concurrently peaking databases * peak CPU utilization per database
D. maximum number of concurrent sessions for all the databases
E. total number of databases * average CPU utilization per database

Correct Answer: ACE
CE: For the vCore-based purchasing model, estimate the vCores needed for the pool as:
MAX(<total number of databases * average vCore utilization per database>, <number of concurrently peaking databases * peak vCore utilization per database>)
Incorrect Answers:
sys.dm_exec_requests: returns information about each request that is executing in SQL Server. It does not have a column named database_name.
sys.dm_db_resource_stats: does not have any start_time column. Note: sys.dm_db_resource_stats returns CPU, I/O, and memory consumption for an Azure SQL Database database. One row exists for every 15 seconds, even if there is no activity in the database. Historical data is maintained for approximately one hour.
sys.dm_user_db_resource_governance: returns actual configuration and capacity settings used by resource governance mechanisms in the current database or elastic pool. It does not have any start_time column.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database

Question #4 Topic 3

You have 50 Azure SQL databases.
You need to notify the database owner when the database settings, such as the database size and pricing tier, are modified in Azure.
What should you do?

A. Create a diagnostic setting for the activity log that has the Security log enabled.
B. For the database, create a diagnostic setting that has the InstanceAndAppAdvanced metric enabled.
C. Create an alert rule that uses a Metric signal type.
D. Create an alert rule that uses an Activity Log signal type.

Correct Answer: D
Activity log events - An alert can trigger on every event, or only when a certain number of events occur. The activity log of a database logs the change of the SKU (stock-keeping unit).
Incorrect Answers:
C: Metric values - The alert triggers when the value of a specified metric crosses a threshold you assign in either direction. That is, it triggers both when the condition is first met and then afterwards when that condition is no longer being met.
Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/alerts-insights-configure-portal

Question #5 Topic 3

You have several Azure SQL databases on the same Azure SQL Database server in a resource group named ResourceGroup1. You must be alerted when CPU usage exceeds 80 percent for any database. The solution must apply to any additional databases that are created on the Azure SQL server.
Which resource type should you use to create the alert?

A. Resource Groups
B. SQL Servers
C. SQL Databases
D. SQL Virtual Machines

Correct Answer: C
There are resource types related to application code, compute infrastructure, networking, and storage + databases. You can deploy up to 800 instances of a resource type in each resource group. Some resources can exist outside of a resource group. These resources are deployed to the subscription, management group, or tenant. Only specific resource types are supported at these scopes.
Reference: https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-providers-and-types

Question #6 Topic 3

You have SQL Server 2019 on an Azure virtual machine that runs Windows Server 2019. The virtual machine has 4 vCPUs and 28 GB of memory. You scale up the virtual machine to 8 vCPUs and 64 GB of memory.
You need to provide the lowest latency for tempdb.
What is the total number of data files that tempdb should contain?

A. 2
B. 4
C. 8
D. 64

Correct Answer: C
The number of files depends on the number of (logical) processors on the machine. As a general rule, if the number of logical processors is less than or equal to eight, use the same number of data files as logical processors. If the number of logical processors is greater than eight, use eight data files; then, if contention continues, increase the number of data files by multiples of 4 until the contention is reduced to acceptable levels, or make changes to the workload/code.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database

Question #7 Topic 3

You have SQL Server on an Azure virtual machine that contains a database named DB1. DB1 contains a table named CustomerPII.
You need to record whenever users query the CustomerPII table.
Which two options should you enable? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. server audit specification
B. SQL Server audit
C. database audit specification
D. a server principal

Correct Answer: BC
Auditing an instance of SQL Server or a SQL Server database involves tracking and logging events that occur on the system. The SQL Server Audit object collects a single instance of server-level or database-level actions and groups of actions to monitor. The audit is at the SQL Server instance level. You can have multiple audits per SQL Server instance. The Database-Level Audit Specification object belongs to an audit. You can create one database audit specification per SQL Server database per audit.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/security/auditing/create-a-server-audit-and-database-audit-specification

Question #8 Topic 3

You have an Azure virtual machine based on a custom image named VM1. VM1 hosts an instance of Microsoft SQL Server 2019 Standard.
You need to automate the maintenance of VM1 to meet the following requirements:
✑ Automate the patching of SQL Server and Windows Server.
✑ Automate full database backups and transaction log backups of the databases on VM1.
✑ Minimize administrative effort.
What should you do first?

A. Enable a system-assigned managed identity for VM1
B. Register the Azure subscription to the Microsoft.Sql resource provider
C. Install an Azure virtual machine Desired State Configuration (DSC) extension on VM1
D. Register the Azure subscription to the Microsoft.SqlVirtualMachine resource provider

Correct Answer: D
Automated Patching depends on the SQL Server infrastructure as a service (IaaS) Agent Extension. The SQL Server IaaS Agent Extension (SqlIaasExtension) runs on Azure virtual machines to automate administration tasks. The SQL Server IaaS extension is installed when you register your SQL Server VM with the SQL Server VM resource provider. To utilize the SQL IaaS Agent extension, you must first register your subscription with the Microsoft.SqlVirtualMachine provider, which gives the SQL IaaS extension the ability to create resources within that specific subscription.
Reference: https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management
https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm?tabs=bash%2Cazure-cli

Question #9 Topic 3

HOTSPOT -
You are building an Azure Stream Analytics job to retrieve game data.
You need to ensure that the job returns the highest scoring record for each five-minute time interval of each game.
How should you complete the Stream Analytics query? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Correct Answer:
Box 1: TopOne() OVER(PARTITION BY Game ORDER BY Score DESC)
TopOne returns the top-rank record, where rank defines the ranking position of the event in the window according to the specified ordering. Ordering/ranking is based on event columns and can be specified in the ORDER BY clause.
Analytic function syntax: TopOne() OVER ([<PARTITION BY clause>] ORDER BY (<column name> [ASC | DESC])+ [<LIMIT DURATION clause>])
Box 2: TumblingWindow(minute, 5)
Tumbling window functions are used to segment a data stream into distinct time segments and perform a function against them. The key differentiators of a tumbling window are that they repeat, do not overlap, and an event cannot belong to more than one tumbling window.
Reference: https://docs.microsoft.com/en-us/stream-analytics-query/topone-azure-stream-analytics
https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/stream-analytics/stream-analytics-window-functions.md

Question #10 Topic 3

A company plans to use Apache Spark analytics to analyze intrusion detection data.
You need to recommend a solution to analyze network and system activity data for malicious activities and policy violations. The solution must minimize administrative efforts.
What should you recommend?

A. Azure Data Lake Storage
B. Azure Databricks
C. Azure HDInsight
D. Azure Data Factory

Correct Answer: B
Azure Databricks has integration with Azure Monitor. Application logs and metrics from Azure Databricks can be sent to a Log Analytics workspace.
Reference: https://docs.microsoft.com/en-us/azure/architecture/databricks-monitoring/application-logs

Question #11 Topic 3

DRAG DROP -
Your company analyzes images from security cameras and sends alerts to security teams that respond to unusual activity. The solution uses Azure Databricks.
You need to send Apache Spark level events, Spark Structured Streaming metrics, and application metrics to Azure Monitor.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:

Correct Answer:
Send application metrics using Dropwizard. Spark uses a configurable metrics system based on the Dropwizard Metrics Library. To send application metrics from Azure Databricks application code to Azure Monitor, follow these steps:
Step 1: Configure your Azure Databricks cluster to use the Databricks monitoring library.
Step 2: Build the spark-listeners-loganalytics-1.0-SNAPSHOT.jar JAR file.
Step 3: Create Dropwizard gauges or counters in your application code.
Reference: https://docs.microsoft.com/en-us/azure/architecture/databricks-monitoring/application-logs

Question #12 Topic 3

You have an Azure data solution that contains an enterprise data warehouse in Azure Synapse Analytics named DW1. Several users execute ad hoc queries to DW1 concurrently. You regularly perform automated data loads to DW1.
You need to ensure that the automated data loads have enough memory available to complete quickly and successfully when the ad hoc queries run.
What should you do?

A. Assign a smaller resource class to the automated data load queries.
B. Create sampled statistics to every column in each table of DW1.
C. Assign a larger resource class to the automated data load queries.
D. Hash distribute the large fact tables in DW1 before performing the automated data loads.

Correct Answer: C
The performance capacity of a query is determined by the user's resource class. Smaller resource classes reduce the maximum memory per query, but increase concurrency. Larger resource classes increase the maximum memory per query, but reduce concurrency.
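The mechanics behind answer C can be sketched in T-SQL. In a dedicated SQL pool, a resource class is assigned by adding the user that runs the load to one of the built-in resource-class database roles; the login name loaduser below is a hypothetical placeholder, while largerc is one of the documented static resource classes.

```sql
-- Run in the dedicated SQL pool (connected as an admin).
-- 'loaduser' is a hypothetical login used only by the automated load jobs.

-- Give the load user a larger resource class, so its queries receive
-- more memory per query (at the cost of lower overall concurrency):
EXEC sp_addrolemember 'largerc', 'loaduser';

-- To revert the user to the default resource class (smallrc):
EXEC sp_droprolemember 'largerc', 'loaduser';
```

Because the resource class is tied to the user, routing the scheduled loads through a dedicated login like this is what lets the ad hoc users keep the smaller default class.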
Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/resource-classes-for-workload-management

Question #13 Topic 3

You are monitoring an Azure Stream Analytics job. You discover that the Backlogged Input Events metric is increasing slowly and is consistently non-zero.
You need to ensure that the job can handle all the events.
What should you do?

A. Remove any named consumer groups from the connection and use $default.
B. Change the compatibility level of the Stream Analytics job.
C. Create an additional output stream for the existing input stream.
D. Increase the number of streaming units (SUs).

Correct Answer: D
Backlogged Input Events: the number of input events that are backlogged. A non-zero value for this metric implies that your job isn't able to keep up with the number of incoming events. If this value is slowly increasing or consistently non-zero, you should scale out your job by increasing the SUs.
Reference: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-monitoring

Question #14 Topic 3

You have an Azure Stream Analytics job.
You need to ensure that the job has enough streaming units provisioned. You configure monitoring of the SU % Utilization metric.
Which two additional metrics should you monitor? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Late Input Events
B. Out of order Events
C. Backlogged Input Events
D. Watermark Delay
E. Function Events

Correct Answer: CD
To react to increased workloads and increase streaming units, consider setting an alert of 80% on the SU % Utilization metric. Also, you can use the watermark delay and backlogged events metrics to see if there is an impact.
Note: Backlogged Input Events: the number of input events that are backlogged. A non-zero value for this metric implies that your job isn't able to keep up with the number of incoming events. If this value is slowly increasing or consistently non-zero, you should scale out your job by increasing the SUs.
Reference: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-monitoring

Question #15 Topic 3

You have an Azure Databricks resource.
You need to log actions that relate to changes in compute for the Databricks resource.
Which Databricks services should you log?

A. clusters
B. jobs
C. DBFS
D. SSH
E. workspace

Correct Answer: A
Cluster logs include information regarding changes in compute.
Incorrect Answers:
E: Workspace logs do not include information related to changes in compute.
Reference: https://docs.microsoft.com/en-us/azure/databricks/administration-guide/account-settings/azure-diagnostic-logs#configure-diagnostic-log-delivery

Question #16 Topic 3

Your company uses Azure Stream Analytics to monitor devices. The company plans to double the number of devices that are monitored.
You need to monitor a Stream Analytics job to ensure that there are enough processing resources to handle the additional load.
Which metric should you monitor?

A. Input Deserialization Errors
B. Late Input Events
C. Early Input Events
D. Watermark delay

Correct Answer: D
The Watermark delay metric is computed as the wall clock time of the processing node minus the largest watermark it has seen so far. The watermark delay metric can rise due to:
1. Not enough processing resources in Stream Analytics to handle the volume of input events.
2. Not enough throughput within the input event brokers, so they are throttled.
3. Output sinks that are not provisioned with enough capacity, so they are throttled.
Reference: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-time-handling

Question #17 Topic 3

You manage an enterprise data warehouse in Azure Synapse Analytics.
Users report slow performance when they run commonly used queries. Users do not report performance changes for infrequently used queries.
You need to monitor resource utilization to determine the source of the performance issues.
Which metric should you monitor?

A. Local tempdb percentage
B. DWU percentage
C. Data Warehouse Units (DWU) used
D. Cache hit percentage

Correct Answer: D
You can use Azure Monitor to view cache metrics to troubleshoot query performance. The key metrics for troubleshooting the cache are Cache hit percentage and Cache used percentage.
Possible scenario: your current working data set cannot fit into the cache, which causes a low cache hit percentage due to physical reads. Consider scaling up your performance level and rerunning your workload to populate the cache.
Reference: https://docs.microsoft.com/da-dk/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache

Question #18 Topic 3

You have an Azure Synapse Analytics dedicated SQL pool named Pool1 and a database named DB1. DB1 contains a fact table named Table1.
You need to identify the extent of the data skew in Table1.
What should you do in Synapse Studio?

A. Connect to Pool1 and query sys.dm_pdw_nodes_db_partition_stats.
B. Connect to the built-in pool and run DBCC CHECKALLOC.
C. Connect to Pool1 and run DBCC CHECKALLOC.
D. Connect to the built-in pool and query sys.dm_pdw_nodes_db_partition_stats.

Correct Answer: A
First connect to Pool1, not the built-in serverless pool, then use sys.dm_pdw_nodes_db_partition_stats to analyze any skewness in the data.
Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/cheat-sheet

Question #19 Topic 3

You have an Azure Synapse Analytics dedicated SQL pool.
You run DBCC PDW_SHOWSPACEUSED('dbo.FactInternetSales'); and get the results shown in the following table.
Which statement accurately describes the dbo.FactInternetSales table?

A. The table contains less than 10,000 rows.
B. All distributions contain data.
C. The table uses round-robin distribution.
D. The table is skewed.

Correct Answer: D
The rows per distribution can vary up to 10% without a noticeable impact on performance. Here the distribution varies more than 10%: the table is skewed.
Note: DBCC PDW_SHOWSPACEUSED displays the number of rows, disk space reserved, and disk space used for a specific table, or for all tables in an Azure Synapse Analytics or Parallel Data Warehouse database. This is a very quick and simple way to see the number of table rows that are stored in each of the 60 distributions of your database. Remember that for the most balanced performance, the rows in your distributed table should be spread evenly across all the distributions.
ROUND_ROBIN distributed tables should not be skewed; data is distributed evenly across the nodes by design.
Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute
https://github.com/rgl/azure-content/blob/master/articles/sql-data-warehouse/sql-data-warehouse-manage-distributed-data-skew.md

Question #20 Topic 3

DRAG DROP -
You have an Azure SQL managed instance named SQLMI1 that has Resource Governor enabled and is used by two apps named App1 and App2.
You need to configure SQLMI1 to limit the CPU and memory resources that can be allocated to App1.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:

Correct Answer:
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/resource-governor/resource-governor?view=sql-server-ver15
https://docs.microsoft.com/en-us/sql/relational-databases/resource-governor/create-and-test-a-classifier-user-defined-function?view=sql-server-ver15

Question #21 Topic 3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have SQL Server 2019 on an Azure virtual machine.
You are troubleshooting performance issues for a query in a SQL Server instance. To gather more information, you query sys.dm_exec_requests and discover that the wait type is PAGELATCH_UP and the wait_resource is 2:3:905856.
You need to improve system performance.
Solution: You shrink the transaction log file.
Does this meet the goal?

A. Yes
B. No

Correct Answer: B
You should instead reduce the use of table variables and temporary tables, or create additional tempdb files.
Note: The following operations use tempdb extensively:
* Repetitive create-and-drop operations on temporary tables (local or global).
* Table variables that use tempdb for storage.
* Etc.
Reference: https://docs.microsoft.com/en-US/troubleshoot/sql/performance/recommendations-reduce-allocation-contention

Question #22 Topic 3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have SQL Server 2019 on an Azure virtual machine.
You are troubleshooting performance issues for a query in a SQL Server instance. To gather more information, you query sys.dm_exec_requests and discover that the wait type is PAGELATCH_UP and the wait_resource is 2:3:905856.
You need to improve system performance.
Solution: You change the data file for the master database to autogrow by 10 percent.
Does this meet the goal?

A. Yes
B. No

Correct Answer: B
You should instead reduce the use of table variables and temporary tables, or create additional tempdb files.
Note: The following operations use tempdb extensively:
* Repetitive create-and-drop operations on temporary tables (local or global).
* Table variables that use tempdb for storage.
* Etc.
Reference: https://docs.microsoft.com/en-US/troubleshoot/sql/performance/recommendations-reduce-allocation-contention

Question #23 Topic 3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have SQL Server 2019 on an Azure virtual machine.
You are troubleshooting performance issues for a query in a SQL Server instance. To gather more information, you query sys.dm_exec_requests and discover that the wait type is PAGELATCH_UP and the wait_resource is 2:3:905856.
You need to improve system performance.
Solution: You reduce the use of table variables and temporary tables.
Does this meet the goal?

A. Yes
B. No

Correct Answer: A
The following operations use tempdb extensively:
* Repetitive create-and-drop operations on temporary tables (local or global).
* Table variables that use tempdb for storage.
* Etc.
Reference: https://docs.microsoft.com/en-US/troubleshoot/sql/performance/recommendations-reduce-allocation-contention

Question #24 Topic 3

You have an Azure SQL database named db1 on a server named server1.
You need to modify the MAXDOP settings for db1.
What should you do?

A. Connect to db1 and run the sp_configure command.
B. Connect to the master database of server1 and run the sp_configure command.
C. Configure the extended properties of db1.
D. Modify the database scoped configuration of db1.

Correct Answer: D
If you determine that a MAXDOP setting different from the default is optimal for your Azure SQL Database workload, you can use the ALTER DATABASE SCOPED CONFIGURATION T-SQL statement.
Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/configure-max-degree-of-parallelism

Question #25 Topic 3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have SQL Server 2019 on an Azure virtual machine.
You are troubleshooting performance issues for a query in a SQL Server instance. To gather more information, you query sys.dm_exec_requests and discover that the wait type is PAGELATCH_UP and the wait_resource is 2:3:905856.
You need to improve system performance.
Solution: You create additional tempdb files.
Does this meet the goal?

A. Yes
B. No

Correct Answer: A
To improve the concurrency of tempdb, try the following methods:
* Increase the number of data files in tempdb to maximize disk bandwidth and reduce contention in allocation structures.
* Etc.
Note: Symptoms - On a server that is running Microsoft SQL Server, you notice severe blocking when the server is experiencing a heavy load. Dynamic management views [sys.dm_exec_requests or sys.dm_os_waiting_tasks] indicate that these requests or tasks are waiting for tempdb resources. Additionally, the wait type is PAGELATCH_UP, and the wait resource points to pages in tempdb.
Reference: https://docs.microsoft.com/en-US/troubleshoot/sql/performance/recommendations-reduce-allocation-contention

Question #26 Topic 3

You have SQL Server on an Azure virtual machine.
You need to add a 4-TB volume that meets the following requirements:
✑ Maximizes IOPS
✑ Uses premium solid state drives (SSDs)
What should you do?

A. Attach two mirrored 4-TB SSDs.
B. Attach a stripe set that contains four 1-TB SSDs.
C. Attach a RAID-5 array that contains five 1-TB SSDs.
D. Attach a single 4-TB SSD.

Correct Answer: B
For more throughput, you can add additional data disks and use disk striping.
Reference: https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/storage-configuration?tabs=windows2016

Question #27 Topic 3

You have an Azure SQL database named db1 on a server named server1. The Intelligent Insights diagnostics log identifies that several tables are missing indexes.
You need to ensure that indexes are created for the tables.
What should you do?

A. Run the DBCC SQLPERF command.
B. Run the DBCC DBREINDEX command.
C. Modify the automatic tuning settings for db1.
D. Modify the Query Store settings for db1.

Correct Answer: C
Automatic tuning is a fully managed intelligent performance service that uses built-in intelligence to continuously monitor queries executed on a database, and it automatically improves their performance. Automatic tuning for Azure SQL Database uses the CREATE INDEX, DROP INDEX, and FORCE LAST GOOD PLAN database advisor recommendations to optimize your database performance.
Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/automatic-tuning-overview

Question #28 Topic 3

You have an Azure SQL managed instance named SQL1 and two Azure web apps named App1 and App2.
You need to limit the number of IOPS that App2 queries generate on SQL1.
Which two actions should you perform on SQL1? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Enable query optimizer fixes.
B. Enable Resource Governor.
C. Enable parameter sniffing.
D. Create a workload group.
E. Configure In-Memory OLTP.
F. Run the Database Engine Tuning Advisor.
G. Reduce the Max Degree of Parallelism value.

Correct Answer: BD
SQL Server Resource Governor is a feature that you can use to manage SQL Server workload and system resource consumption. Resource Governor enables you to specify limits on the amount of CPU, physical I/O, and memory that incoming application requests can use.
The following concept is fundamental to understanding and using Resource Governor:
* Workload groups. A workload group serves as a container for session requests that have similar classification criteria. A workload group allows for aggregate monitoring of the sessions and defines policies for the sessions. Each workload group is in a resource pool. Two workload groups (internal and default) are created and mapped to their corresponding resource pools when SQL Server is installed. Resource Governor also supports user-defined workload groups.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/resource-governor/resource-governor?view=sql-server-ver15

Question #29 Topic 3

You have an Azure SQL database named db1 on a server named server1. The Intelligent Insights diagnostics log identifies queries that cause performance issues due to tempdb contention.
You need to resolve the performance issues.
What should you do?

A. Implement memory-optimized tables.
B. Run the DBCC FLUSHPROCINDB command.
C. Replace the sequential index keys with nonsequential keys.
D. Run the DBCC DBREINDEX command.

Correct Answer: A
tempdb contention troubleshooting: the diagnostics log outputs tempdb contention details. You can use the information as the starting point for troubleshooting. There are two things you can pursue to alleviate this kind of contention and increase the throughput of the overall workload: you can stop using temporary tables, or you can use memory-optimized tables.
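As a minimal sketch of the memory-optimized option, a frequently re-created temp table can be replaced with a memory-optimized table variable, which allocates no tempdb pages at all. The type and column names below are hypothetical, and In-Memory OLTP must be available on the instance (for example, a Premium or Business Critical Azure SQL database, or a SQL Server instance with a MEMORY_OPTIMIZED_DATA filegroup).

```sql
-- One-time setup: a memory-optimized table type (hypothetical schema).
-- Memory-optimized table types require at least one index.
CREATE TYPE dbo.SessionStaging AS TABLE
(
    SessionId  int           NOT NULL,
    Payload    nvarchar(200) NULL,
    INDEX ix_session NONCLUSTERED (SessionId)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Per-execution use: the variable lives in memory, not in tempdb,
-- so repeated executions no longer contend on tempdb allocation pages.
DECLARE @staging dbo.SessionStaging;
INSERT INTO @staging (SessionId, Payload) VALUES (1, N'example');
SELECT SessionId, Payload FROM @staging;
```

The trade-off is that memory-optimized data consumes server memory for its lifetime, so this pattern suits small, hot staging sets rather than large intermediate results.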
Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/intelligent-insights-troubleshoot-performance#tempdb-contention 100/281 Question #30 Topic 3 HOTSPOT - You have an Azure subscription that contains an Azure SQL database. The database fails to respond to queries in a timely manner. You need to identify whether the issue relates to resource_semaphore waits. How should you complete the Transact-SQL query? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Correct Answer: If your top wait type is RESOURCE_SEMAHPORE and you don't have a high CPU usage issue, you may have a memory grant waiting issue. Determine if a RESOURCE_SEMAHPORE wait is a top wait Use the following query to determine if a RESOURCE_SEMAHPORE wait is a top wait SELECT wait_type, SUM(wait_time) AS total_wait_time_ms FROM sys.dm_exec_requests AS req 101/281 JOIN sys.dm_exec_sessions AS sess ON req.session_id = sess.session_id WHERE is_user_process = 1 - GROUP BY wait_type - ORDER BY SUM(wait_time) DESC; Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/monitoring-with-dmvs Question #31 Topic 3 You have SQL Server 2019 on an Azure virtual machine that runs Windows Server 2019. The virtual machine has 4 vCPUs and 28 GB of memory. You scale up the virtual machine to 8 vCPUs and 64 GB of memory. You need to reduce tempdb contention without negatively affecting server performance. What is the number of secondary data les that you should con gure for tempdb? A. 2 B. 4 C. 8 D. 64 Correct Answer: C The number of secondary data les depends on the number of (logical) processors on the machine. As a general rule, if the number of logical processors is less than or equal to eight, use the same number of data les as logical processors. If the number of logical processors is greater than eight, use eight data les. 
Then if contention continues, increase the number of data les by multiples of four until the contention decreases to acceptable levels, or make changes to the workload/code. Reference: https://docs.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database 102/281 Question #32 Topic 3 You receive numerous alerts from Azure Monitor for an Azure SQL Database instance. You need to reduce the number of alerts. You must only receive alerts if there is a signi cant change in usage patterns for an extended period. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Set Threshold Sensitivity to High B. Set the Alert logic threshold to Dynamic C. Set the Alert logic threshold to Static D. Set Threshold Sensitivity to Low E. Set Force Plan to On Correct Answer: BD B: Dynamic Thresholds continuously learns the data of the metric series and tries to model it using a set of algorithms and methods. It detects patterns in the data such as seasonality (Hourly / Daily / Weekly), and is able to handle noisy metrics (such as machine CPU or memory) as well as metrics with low dispersion (such as availability and error rate). D: Alert threshold sensitivity is a high-level concept that controls the amount of deviation from metric behavior required to trigger an alert. Low ‫ג‬€" The thresholds will be loose with more distance from metric series pattern. An alert rule will only trigger on large deviations, resulting in fewer alerts. Incorrect Answers: A: High ‫ג‬€" The thresholds will be tight and close to the metric series pattern. An alert rule will be triggered on the smallest deviation, resulting in more alerts. Reference: https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-dynamic-thresholds Question #33 Topic 3 You have an Azure SQL database named sqldb1. You need to minimize the amount of space by the data and log les of sqldb1. What should you run? A. 
DBCC SHRINKDATABASE B. sp_clean_db_free_space C. sp_clean_db_file_free_space D. DBCC SHRINKFILE
Correct Answer: A
DBCC SHRINKDATABASE shrinks the size of the data and log files in the specified database.
Incorrect Answers:
D: To shrink one data or log file at a time for a specific database, execute the DBCC SHRINKFILE command.
Reference: https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-shrinkdatabase-transact-sql

Question #34 Topic 3
You have an Azure SQL Database server named sqlsrv1 that hosts 10 Azure SQL databases. The databases perform slower than expected. You need to identify whether the performance issue relates to the use of tempdb by Azure SQL databases in sqlsrv1. What should you do? A. Run Query Store-based queries B. Review information provided by SQL Server Profiler-based traces C. Review information provided by Query Performance Insight D. Run dynamic management view-based queries
Correct Answer: D
Microsoft Azure SQL Database and Azure SQL Managed Instance enable a subset of dynamic management views to diagnose performance problems, which might be caused by blocked or long-running queries, resource bottlenecks, poor query plans, and so on. This includes identifying tempdb performance issues: when identifying IO performance issues, the top wait type associated with tempdb issues is PAGELATCH_* (not PAGEIOLATCH_*).
Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/monitoring-with-dmvs

Question #35 Topic 3
DRAG DROP - You are building an Azure virtual machine. You allocate two 1-TiB, P30 premium storage disks to the virtual machine. Each disk provides 5,000 IOPS. You plan to migrate an on-premises instance of Microsoft SQL Server to the virtual machine. The instance has a database that contains a 1.2-TiB data file. The database requires 10,000 IOPS. You need to configure storage for the virtual machine to support the database. Which three objects should you create in sequence? 
To answer, move the appropriate objects from the list of objects to the answer area and arrange them in the correct order. Select and Place:
Correct Answer: Follow these steps to create a striped virtual disk:
✑ Create Storage Pool.
✑ Create Virtual Disk.
✑ Create Volume.
Box 1: a storage pool
Box 2: a virtual disk that uses stripe layout
Disk striping: use multiple disks and stripe them together to get a combined higher IOPS and throughput limit. The combined limit per VM should be higher than the combined limits of the attached premium disks.
Box 3: a volume
Reference: https://hanu.com/hanu-how-to-striping-of-disks-for-azure-sql-server/

Question #36 Topic 3
HOTSPOT - You need to use an Azure Resource Manager (ARM) template to deploy an Azure virtual machine that will host a Microsoft SQL Server instance. The solution must maximize disk I/O performance for the SQL Server database and log files. How should you complete the template? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Correct Answer:

Question #37 Topic 3
You have an Azure SQL managed instance named MI1. You need to implement automatic tuning for the databases of MI1. What should you do? A. Use the REST API to call the patch operation and modify the AutomaticTuningServerMode property. B. From the Azure portal, configure automatic tuning. C. Use Transact-SQL to enable the FORCE_LAST_GOOD_PLAN option.
Correct Answer: C

Question #38 Topic 3
You have an Azure subscription that contains an Azure SQL database named db1. You need to implement SQL insights for db1. Which two resources should you create first? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. a storage account B. a virtual machine C. an Azure logic app D. an Azure function E. 
a Log Analytics workspace
Correct Answer: BE

Question #39 Topic 3
You have an Azure subscription that contains the resources shown in the following table. App1 experiences transient connection errors and timeouts when it attempts to access db1 after extended periods of inactivity. You need to modify db1 to resolve the issues experienced by App1 as soon as possible, without considering immediate costs. What should you do? A. Enable automatic tuning for db1. B. Increase the number of vCores allocated to db1. C. Decrease the auto-pause delay for db1. D. Disable auto-pause delay for db1.
Correct Answer: D

Question #40 Topic 3
HOTSPOT - You have an Azure SQL database named DB1 that contains a table named Table1. You run a query to load data into Table1. The performance metrics of Table1 during the load operation are shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
Correct Answer:

Question #41 Topic 3
DRAG DROP - You have a database named db1. The log for db1 contains the following entry. You need to ensure that db1 can process transactions. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Correct Answer:

Question #42 Topic 3
You have the following resources: 15 SQL Server on Azure Virtual Machines instances and 20 Azure SQL databases. You need to recommend a solution to centrally monitor the resources for security vulnerabilities. What should you include in the recommendation? A. database audits B. Microsoft Defender C. SQL insights D. Azure SQL Auditing
Correct Answer: C

Question #43 Topic 3
HOTSPOT - You have an Azure subscription that contains an instance of SQL Server on Azure Virtual Machines named SQLVM1 and a user named User1. 
SQLVM1 hosts a database named DB1. You need to ensure that User1 can perform the following tasks on DB1: Create jobs. View all jobs. Modify, delete, and disable the jobs the user created. The solution must use the principle of least privilege. Which built-in database role should you assign to User1, and where is the role defined? To answer, select the appropriate options in the answer area.
Correct Answer:

Question #44 Topic 3
You have an Azure subscription that contains an Azure SQL managed instance named SQLMI1 and a Log Analytics workspace named Workspace1. You need to collect performance metrics for SQLMI1 and stream the metrics to Workspace1. What should you do first? A. Create a private endpoint connection on SQLMI1. B. Configure Azure SQL Analytics to use Workspace1. C. Modify the Compute + storage settings for SQLMI1. D. Modify the diagnostic settings for SQLMI1.
Correct Answer: D

Question #45 Topic 3
HOTSPOT - You have an Azure SQL database named DB1 in the General Purpose service tier. You need to monitor DB1 by using SQL Insights. What should you include in the solution? To answer, select the appropriate options in the answer area.
Correct Answer:

Question #46 Topic 3
You have an Azure SQL database named DB1 in the General Purpose service tier. The performance metrics for DB1 are shown in the following exhibit. You need to reduce the Log IO percentage. The solution must minimize costs. What should you do? A. Change Service tier to Business Critical. B. Increase the number of vCores. C. Perform a checkpoint operation. D. Change Recovery model to Simple.
Correct Answer: A

Question #47 Topic 3
You have an Azure SQL database named DB1 that contains a nonclustered index named index1. End users report slow queries when they use index1. You need to identify the operations that are being performed on the index. Which dynamic management view should you use? A. Sys.dm_exec_query_plan_stats B. Sys.dm_db_index_physical_stats C. 
Sys.dm_db_index_operational_stats D. Sys.dm_db_index_usage_stats
Correct Answer: D

Question #48 Topic 3
HOTSPOT - You have an Azure SQL managed instance named SQLMI1 that hosts multiple databases. You need to monitor the performance of SQLMI1 and identify which database uses the most memory and the most disk I/O. Which objects should you query? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Correct Answer:

Topic 4 - Question Set 4

Question #1 Topic 4
You have SQL Server on an Azure virtual machine that contains a database named DB1. You have an application that queries DB1 to generate a sales report. You need to see the parameter values from the last time the query was executed. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Enable Last_Query_Plan_Stats in the master database B. Enable Lightweight_Query_Profiling in DB1 C. Enable Last_Query_Plan_Stats in DB1 D. Enable Lightweight_Query_Profiling in the master database E. Enable PARAMETER_SNIFFING in DB1
Correct Answer: BC
Last_Query_Plan_Stats allows you to enable or disable collection of the last query plan statistics (equivalent to an actual execution plan) in sys.dm_exec_query_plan_stats. Lightweight profiling can be disabled at the database level using the LIGHTWEIGHT_QUERY_PROFILING database scoped configuration: ALTER DATABASE SCOPED CONFIGURATION SET LIGHTWEIGHT_QUERY_PROFILING = OFF;.
Incorrect Answers:
A: Enable it for DB1, not for the master database.
E: Parameter sensitivity, also known as "parameter sniffing", refers to a process whereby SQL Server "sniffs" the current parameter values during compilation or recompilation, and passes them along to the Query Optimizer so that they can be used to generate potentially more efficient query execution plans. 
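Tying the correct options (B and C) together, both are database scoped configurations; a minimal hedged sketch, run in the context of DB1 (not master):

```sql
-- Run in DB1. Both options ON so that sys.dm_exec_query_plan_stats
-- can return the last actual plan, including sniffed parameter values.
ALTER DATABASE SCOPED CONFIGURATION SET LIGHTWEIGHT_QUERY_PROFILING = ON;
ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON;

-- Later, retrieve the last known actual plan for cached queries:
SELECT ps.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan_stats(cp.plan_handle) AS ps;
```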
Parameter values are sniffed during compilation or recompilation for the following types of batches:
✑ Stored procedures
✑ Queries submitted via sp_executesql
✑ Prepared queries
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-profiling-infrastructure

Question #2 Topic 4
HOTSPOT - You have SQL Server on an Azure virtual machine that contains a database named Db1. You need to enable automatic tuning for Db1. How should you complete the statements? To answer, select the appropriate answer in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Correct Answer:
Box 1: SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);
You must enable the Query Store first.
Incorrect: If the server were Azure SQL Database or Managed Instance, the response would be SET AUTOMATIC_TUNING = AUTO, but as this is SQL Server, the Query Store needs to be enabled first.
Box 2: SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON)
To configure individual automatic tuning options via T-SQL, connect to the database and execute a query such as this one: ALTER DATABASE current SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON)
Setting an individual tuning option to ON will override any setting that the database inherited and enable the tuning option. Setting it to OFF will also override any setting that the database inherited and disable the tuning option.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/automatic-tuning/automatic-tuning https://docs.microsoft.com/en-us/azure/azure-sql/database/automatic-tuning-enable

Question #3 Topic 4
You deploy a database to an Azure SQL Database managed instance. You need to prevent read queries from blocking queries that are trying to write to the database. Which database option should you set? A. PARAMETERIZATION to FORCED B. PARAMETERIZATION to SIMPLE C. Delayed Durability to Forced D. 
READ_COMMITTED_SNAPSHOT to ON
Correct Answer: D
In SQL Server, you can also minimize locking contention while protecting transactions from dirty reads of uncommitted data modifications using either:
✑ The READ COMMITTED isolation level with the READ_COMMITTED_SNAPSHOT database option set to ON.
✑ The SNAPSHOT isolation level.
If READ_COMMITTED_SNAPSHOT is set to ON (the default on Azure SQL Database), the Database Engine uses row versioning to present each statement with a transactionally consistent snapshot of the data as it existed at the start of the statement. Locks are not used to protect the data from updates by other transactions.
Incorrect Answers:
A: You can specify that all queries in a database be parameterized by setting the PARAMETERIZATION database option to FORCED. This process is referred to as forced parameterization.
B: When the PARAMETERIZATION database option is set to SIMPLE, the SQL Server query optimizer may choose to parameterize the queries. This means that any literal values that are contained in a query are substituted with parameters. This process is referred to as simple parameterization. When SIMPLE parameterization is in effect, you cannot control which queries are parameterized and which queries are not.
C: Delayed transaction durability is accomplished using asynchronous log writes to disk. Transaction log records are kept in a buffer and written to disk when the buffer fills or a buffer flushing event takes place. Delayed transaction durability reduces both latency and contention within the system. Some of the cases in which you could benefit from using delayed transaction durability are:
✑ You can tolerate some data loss.
✑ You are experiencing a bottleneck on transaction log writes.
✑ Your workloads have a high contention rate.
Reference: https://docs.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql

Question #4 Topic 4
You have an Azure SQL database. 
You discover that the plan cache is full of compiled plans that were used only once. You run the select * from sys.database_scoped_configurations Transact-SQL command and receive the results shown in the following table. You need to relieve the memory pressure. What should you configure? A. LEGACY_CARDINALITY_ESTIMATION B. QUERY_OPTIMIZER_HOTFIXES C. OPTIMIZE_FOR_AD_HOC_WORKLOADS D. ACCELERATED_PLAN_FORCING
Correct Answer: C
OPTIMIZE_FOR_AD_HOC_WORKLOADS = { ON | OFF } Enables or disables a compiled plan stub to be stored in cache when a batch is compiled for the first time. The default is OFF. Once the database scoped configuration OPTIMIZE_FOR_AD_HOC_WORKLOADS is enabled for a database, a compiled plan stub will be stored in cache when a batch is compiled for the first time. Plan stubs have a smaller memory footprint compared to the size of the full compiled plan.
Incorrect Answers:
A: LEGACY_CARDINALITY_ESTIMATION = { ON | OFF | PRIMARY } Enables you to set the query optimizer cardinality estimation model to the SQL Server 2012 and earlier version, independent of the compatibility level of the database. The default is OFF, which sets the query optimizer cardinality estimation model based on the compatibility level of the database.
B: QUERY_OPTIMIZER_HOTFIXES = { ON | OFF | PRIMARY } Enables or disables query optimization hotfixes regardless of the compatibility level of the database. The default is OFF, which disables query optimization hotfixes that were released after the highest available compatibility level was introduced for a specific version (post-RTM).
Reference: https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql

Question #5 Topic 4
You have SQL Server on an Azure virtual machine that contains a database named DB1. 
You view a plan summary that shows the duration in milliseconds of each execution of query 1178902, as shown in the following exhibit: What should you do to ensure that the query uses the execution plan which executes in the least amount of time? A. Force the query execution plan for plan 1221065. B. Run the DBCC FREEPROCCACHE command. C. Force the query execution plan for plan 1220917. D. Disable parameter sniffing.
Correct Answer: A
As per the exhibit, execution plan 1221065 has a lower execution time compared to plan 1220917.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-usage-scenarios

Question #6 Topic 4
HOTSPOT - You have an Azure SQL database named DB1. The automatic tuning options for DB1 are configured as shown in the following exhibit. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
Correct Answer:
Box 1: No - By default CREATE INDEX is disabled. It is here configured as INHERIT, so it is disabled.
Box 2: No - By default DROP INDEX is disabled.
Box 3: Yes - FORCE LAST GOOD PLAN (automatic plan correction) identifies Azure SQL queries using an execution plan that is slower than the previous good plan, and queries using the last known good plan instead of the regressed plan.
Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/automatic-tuning-enable

Question #7 Topic 4
You have an Azure SQL database named DB1. You run a query while connected to DB1. You review the actual execution plan for the query, and you add an index to a table referenced by the query. You need to compare the previous actual execution plan for the query to the Live Query Statistics. What should you do first in Microsoft SQL Server Management Studio (SSMS)? A. For DB1, set QUERY_CAPTURE_MODE of Query Store to All. B. Run the SET SHOWPLAN_ALL Transact-SQL statement. C. Save the actual execution plan. D. 
Enable Query Store for DB1.
Correct Answer: C
The Plan Comparison menu option allows side-by-side comparison of two different execution plans, for easier identification of similarities and changes that explain the different behaviors for all the reasons stated above. This option can compare between: Two previously saved execution plan files (.sqlplan extension). One active execution plan and one previously saved query execution plan. Two selected query plans in Query Store.

Question #8 Topic 4
You have an Azure SQL database. Users report that the executions of a stored procedure are slower than usual. You suspect that a regressed query is causing the performance issue. You need to view the query execution plan to verify whether a regressed query is causing the issue. The solution must minimize effort. What should you use? A. Performance Recommendations in the Azure portal B. Extended Events in Microsoft SQL Server Management Studio (SSMS) C. Query Store in Microsoft SQL Server Management Studio (SSMS) D. Query Performance Insight in the Azure portal
Correct Answer: C
Use the Query Store page in SQL Server Management Studio. Query performance regressions caused by execution plan changes can be non-trivial and time consuming to resolve. Since the Query Store retains multiple execution plans per query, it can enforce policies to direct the Query Processor to use a specific execution plan for a query. This is referred to as plan forcing. Plan forcing in Query Store is provided by using a mechanism similar to the USE PLAN query hint, but it does not require any change in user applications. Plan forcing can resolve a query performance regression caused by a plan change in a very short period of time.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store

Question #9 Topic 4
You have an Azure SQL database. The database contains a table that uses a columnstore index and is accessed infrequently. 
You enable columnstore archival compression. What are two possible results of the configuration? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Queries that use the index will consume more disk I/O. B. Queries that use the index will retrieve fewer data pages. C. The index will consume more disk space. D. The index will consume more memory. E. Queries that use the index will consume more CPU resources.
Correct Answer: BE
For rowstore tables and indexes, use the data compression feature to help reduce the size of the database. In addition to saving space, data compression can help improve performance of I/O-intensive workloads because the data is stored in fewer pages and queries need to read fewer pages from disk. Use columnstore archival compression to further reduce the data size for situations when you can afford extra time and CPU resources to store and retrieve the data.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/data-compression/data-compression

Question #10 Topic 4
You are designing a dimension table in an Azure Synapse Analytics dedicated SQL pool. You need to create a surrogate key for the table. The solution must provide the fastest query performance. What should you use for the surrogate key? A. an IDENTITY column B. a GUID column C. a sequence object
Correct Answer: A
Dedicated SQL pool supports many, but not all, of the table features offered by other databases. Surrogate keys are not supported. Implement one with an IDENTITY column.
Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-overview

Question #11 Topic 4
You are designing a star schema for a dataset that contains records of online orders. Each record includes an order date, an order due date, and an order ship date. 
You need to ensure that the design provides the fastest query times of the records when querying for arbitrary date ranges and aggregating by fiscal calendar attributes. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Create a date dimension table that has a DateTime key. B. Create a date dimension table that has an integer key in the format of YYYYMMDD. C. Use built-in SQL functions to extract date attributes. D. Use integer columns for the date fields. E. Use DateTime columns for the date fields.
Correct Answer: BD
Why use a date dimension table in a data warehouse? The Date dimension is one of these dimension tables related to the fact. Here is a simple data diagram for a data mart of Internet Sales information for the Adventure Works DW database, which can be obtained for free from CodePlex or other online sources. The relationship is created by the surrogate key columns (integer data type) rather than the date data type. The queries users have to write against a data mart are much simpler than against a transaction database. There are fewer joins because of the one-to-many relationships between the fact and dimension table(s). The dimension tables are confusing to someone who has been normalizing databases as a career. The dimension is a flattened or de-normalized table. This creates cases of duplicate data, but the simplistic query overrides the duplicate data in a dimensional model.
Reference: https://www.mssqltips.com/sqlservertip/3117/defining-role-playing-dimensions-for-sql-server-analysis-services/ https://community.idera.com/database-tools/blog/b/community_blog/posts/why-use-a-date-dimension-table-in-a-data-warehouse

Question #12 Topic 4
HOTSPOT - You are designing an enterprise data warehouse in Azure Synapse Analytics that will store website traffic analytics in a star schema. You plan to have a fact table for website visits. The table will be approximately 5 GB. 
You need to recommend which distribution type and index type to use for the table. The solution must provide the fastest query performance. What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Correct Answer:
Box 1: Hash - Consider using a hash-distributed table when: The table size on disk is more than 2 GB. The table has frequent insert, update, and delete operations.
Box 2: Clustered columnstore - Clustered columnstore tables offer both the highest level of data compression and the best overall query performance.
Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-index

Question #13 Topic 4
You have an Azure Data Factory pipeline that is triggered hourly. The pipeline has had 100% success for the past seven days. The pipeline execution fails, and two retries that occur 15 minutes apart also fail. The third failure returns the following error. What is a possible cause of the error? A. From 06:00 to 07:00 on January 10, 2021, there was no data in wwi/BIKES/CARBON. B. The parameter used to generate year=2021/month=01/day=10/hour=06 was incorrect. C. From 06:00 to 07:00 on January 10, 2021, the file format of data in wwi/BIKES/CARBON was incorrect. D. The pipeline was triggered too early.
Correct Answer: B
A file is missing.
Incorrect: Not A, not C, not D: The time of the error is 07:45.

Question #14 Topic 4
HOTSPOT - You have an Azure SQL database. You are reviewing a slow performing query as shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. 
Hot Area:
Correct Answer:
Box 1: Live Query Statistics - Live Query Statistics, as it shows a percentage of the execution.
Box 2: Key Lookup - The use of a Key Lookup operator in a query plan indicates that the query might benefit from performance tuning. For example, query performance might be improved by adding a covering index.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/performance/live-query-statistics?view=sql-server-ver15 https://docs.microsoft.com/en-us/sql/relational-databases/showplan-logical-and-physical-operators-reference

Question #15 Topic 4
You have an Azure SQL managed instance. You need to gather the last execution of a query plan and its runtime statistics. The solution must minimize the impact on currently running queries. What should you do? A. Generate an estimated execution plan. B. Generate an actual execution plan. C. Run sys.dm_exec_query_plan_stats. D. Generate Live Query Statistics.
Correct Answer: C
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-plan-stats-transact-sql?view=sql-server-ver15

Question #16 Topic 4
HOTSPOT - You have an Azure SQL database named db1 on a server named server1. You use Query Performance Insight to monitor db1. You need to modify the Query Store configuration to ensure that performance monitoring data is available as soon as possible. Which configuration setting should you modify and which value should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Correct Answer:
Box 1: INTERVAL_LENGTH_MINUTES - INTERVAL_LENGTH_MINUTES defines the size of the time window during which collected runtime statistics for query plans are aggregated and persisted. Every active query plan has at most one row for a period of time defined with this configuration. 
Default: 60
Box 2: 1 - Statistics Collection Interval (INTERVAL_LENGTH_MINUTES): Defines the level of granularity for the collected runtime statistics, expressed in minutes. The default is 60 minutes. Consider using a lower value if you require finer granularity or less time to detect and mitigate issues. Use SQL Server Management Studio or Transact-SQL to set a different value for Statistics Collection Interval: ALTER DATABASE [QueryStoreDB] SET QUERY_STORE (INTERVAL_LENGTH_MINUTES = 60);
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store

Question #17 Topic 4
You have an Azure SQL Database managed instance. The instance starts experiencing performance issues. You need to identify which query is causing the issue and retrieve the execution plan for the query. The solution must minimize administrative effort. What should you use? A. SQL Profiler B. Extended Events C. Query Store D. dynamic management views
Correct Answer: D
Use the dynamic management view sys.dm_exec_requests to track currently executing queries and the associated worker time.
Incorrect:
Not C: DMVs that track Query Store and wait statistics show results for only successfully completed and timed-out queries.
Reference: https://docs.microsoft.com/en-us/azure/azure-sql/identify-query-performance-issues

Question #18 Topic 4
You have an Azure SQL database named DB1. You need to display the estimated execution plan of a query by using the query editor in the Azure portal. What should you do first? A. Run the SET SHOWPLAN_ALL Transact-SQL statement. B. For DB1, set QUERY_CAPTURE_MODE of Query Store to All. C. Run the SET FORCEPLAN Transact-SQL statement. D. Enable Query Store for DB1.
Correct Answer: A
The SET SHOWPLAN_ALL command causes Microsoft SQL Server not to execute Transact-SQL statements. 
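As a hedged sketch of its use (the query and table name are illustrative assumptions):

```sql
SET SHOWPLAN_ALL ON;
GO
-- Not executed: the engine returns one row per plan operator with
-- estimated rows, I/O, and CPU costs instead of query results.
SELECT OrderID, OrderDate
FROM dbo.SalesOrders            -- hypothetical table
WHERE OrderDate >= '2021-01-01';
GO
SET SHOWPLAN_ALL OFF;
GO
```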
Instead, SQL Server returns detailed information about how the statements are executed and provides estimates of the resource requirements for the statements.
Reference: https://docs.microsoft.com/en-us/sql/t-sql/statements/set-showplan-all-transact-sql?view=sql-server-ver15

Question #19 Topic 4
HOTSPOT - You have an Azure SQL database. You have a query and the associated execution plan as shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area:
Correct Answer:
Box 1: Key Lookup - The Key Lookup cost is 99%, so that is the performance bottleneck.
Box 2: nonclustered index - The key lookup on the clustered index is used because the nonclustered index does not include the required columns to resolve the query. If you add the required columns to the nonclustered index, the key lookup will not be required.

Question #20 Topic 4
You have an instance of SQL Server on Azure Virtual Machines that has a database named DB1. You plan to implement Azure SQL Data Sync for DB1. Which isolation level should you configure? A. SERIALIZABLE B. SNAPSHOT C. READ UNCOMMITTED D. READ COMMITTED
Correct Answer: B
Data Sync general requirements include: Snapshot isolation must be enabled for both sync members and the hub.
Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/sql-data-sync-data-sql-server-sql-database

Question #21 Topic 4
HOTSPOT - You have SQL Server on an Azure virtual machine. You review the query plan shown in the following exhibit. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
Correct Answer:
Box 1: No - There is only one query plan available. Force has no effect.
Box 2: No - Adding an index will not increase IO usage. 
Box 3: Yes - The performance would improve.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store

Question #22 Topic 4
A data engineer creates a table to store employee information for a new application. All employee names are in the US English alphabet. All addresses are locations in the United States. The data engineer uses the following statement to create the table. You need to recommend changes to the data types to reduce storage and improve performance. Which two actions should you recommend? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Change Salary to the money data type. B. Change PhoneNumber to the float data type. C. Change LastHireDate to the datetime2(7) data type. D. Change PhoneNumber to the bigint data type. E. Change LastHireDate to the date data type.
Correct Answer: AE
A: money takes less space compared to VARCHAR(20).
E: date takes less space compared to datetime.
Reference: https://docs.microsoft.com/en-us/sql/t-sql/data-types/data-types-transact-sql

Question #23 Topic 4
You have an Azure SQL database. You identify a long running query. You need to identify which operation in the query is causing the performance issue. What should you use to display the query execution plan in Microsoft SQL Server Management Studio (SSMS)? A. Live Query Statistics B. an estimated execution plan C. an actual execution plan D. Client Statistics
Correct Answer: A
SQL Server Management Studio provides the ability to view the live execution plan of an active query. This live query plan provides real-time insights into the query execution process as the controls flow from one query plan operator to another. The live query plan displays the overall query progress and operator-level run-time execution statistics such as the number of rows produced, elapsed time, operator progress, etc. 
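Outside of the SSMS UI, the in-flight statistics behind Live Query Statistics can also be inspected via DMVs; a hedged sketch, assuming lightweight query profiling is enabled (it is on by default in SQL Server 2019 and Azure SQL):

```sql
-- Show the live execution plan of currently running user requests.
-- sys.dm_exec_query_statistics_xml returns the transient, in-flight
-- plan for a session while its query is still executing.
SELECT req.session_id,
       req.start_time,
       req.status,
       stat.query_plan
FROM sys.dm_exec_requests AS req
CROSS APPLY sys.dm_exec_query_statistics_xml(req.session_id) AS stat
WHERE req.session_id <> @@SPID;
```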
Because this data is available in real time without needing to wait for the query to complete, these execution statistics are extremely useful for debugging query performance issues.
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/performance/live-query-statistics

Question #24 Topic 4
You have a version-8.0 Azure Database for MySQL database. You need to identify which database queries consume the most resources.
Which tool should you use?
A. Query Store
B. Metrics
C. Query Performance Insight
D. Alerts
Correct Answer: A
The Query Store feature in Azure Database for MySQL provides a way to track query performance over time. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It separates data by time windows so that you can see database usage patterns. Data for all users, databases, and queries is stored in the mysql schema database in the Azure Database for MySQL instance.
Reference: https://docs.microsoft.com/en-us/azure/mysql/concepts-query-store

Question #25 Topic 4
You have an Azure subscription that contains an Azure SQL database. The database contains a table named table1. You execute the following Transact-SQL statements.
You need to reduce the time it takes to perform analytic queries on the database.
Which configuration should you enable?
A. ROW_MODE_MEMORY_GRANT_FEEDBACK
B. BATCH_MODE_MEMORY_GRANT_FEEDBACK
C. BATCH_MODE_ADAPTIVE_JOINS
D. BATCH_MODE_ON_ROWSTORE
Correct Answer: D

Question #26 Topic 4
DRAG DROP -
You create a new Azure SQL managed instance named SQL1 and enable Database Mail extended stored procedures.
You need to ensure that SQL Server Agent jobs running on SQL1 can notify administrators when a failure occurs.
Which three actions should you perform in sequence?
To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Correct Answer:

Question #27 Topic 4
You have a Microsoft SQL Server 2019 database named DB1 and an Azure SQL managed instance named SQLMI1. You need to move a SQL Server Agent job from DB1 to SQLMI1.
Which job attribute is unsupported in SQLMI1?
A. log to table
B. email notifications
C. schedules
D. output files
Correct Answer: D

Question #28 Topic 4
DRAG DROP -
You create an Azure SQL managed instance and a job that performs backups. You need to configure the job to notify a distribution group by email when the job fails. The solution must minimize administrative effort.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Correct Answer:

Question #29 Topic 4
You have an Azure SQL managed instance. You need to configure the SQL Server Agent service to email job notifications.
Which statement should you execute?
A. EXECUTE msdb.dbo.sysmail_add_profile_sp @profile_name = 'sysadmin_dbmail_profile';
B. EXECUTE msdb.dbo.sysmail_add_profile_sp @profile_name = 'application_dbmail_profile';
C. EXECUTE msdb.dbo.sysmail_add_profile_sp @profile_name = 'AzureManagedInstance_dbmail_profile';
D. EXECUTE msdb.dbo.sysmail_add_profile_sp @profile_name = 'sys_dbmail_profile';
Correct Answer: C

Topic 5 - Question Set 5

Question #1 Topic 5
HOTSPOT -
You have an Azure Data Factory instance named ADF1 and two Azure Synapse Analytics workspaces named WS1 and WS2.
ADF1 contains the following pipelines:
✑ P1: Uses a copy activity to copy data from a nonpartitioned table in a dedicated SQL pool of WS1 to an Azure Data Lake Storage Gen2 account
✑ P2: Uses a copy activity to copy data from text-delimited files in an Azure Data Lake Storage Gen2 account to a nonpartitioned table in a dedicated SQL pool of WS2
You need to configure P1 and P2 to maximize parallelism and performance.
Which dataset settings should you configure for the copy activity of each pipeline? To answer, select the appropriate options in the answer area.
Hot Area:
Correct Answer:
P1: Set the Partition option to Dynamic Range.
The SQL Server connector in copy activity provides built-in data partitioning to copy data in parallel.
P2: Set the Copy method to PolyBase.
PolyBase is the most efficient way to move data into Azure Synapse Analytics. Use the staging blob feature to achieve high load speeds from all types of data stores, including Azure Blob storage and Data Lake Store. (PolyBase supports Azure Blob storage and Azure Data Lake Store by default.)
Reference: https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse https://docs.microsoft.com/en-us/azure/data-factory/load-azure-sql-data-warehouse

Question #2 Topic 5
You have the following Azure Data Factory pipelines:
✑ Ingest Data from System1
✑ Ingest Data from System2
✑ Populate Dimensions
✑ Populate Facts
Ingest Data from System1 and Ingest Data from System2 have no dependencies. Populate Dimensions must execute after Ingest Data from System1 and Ingest Data from System2. Populate Facts must execute after the Populate Dimensions pipeline. All the pipelines must execute every eight hours.
What should you do to schedule the pipelines for execution?
A. Add a schedule trigger to all four pipelines.
B. Add an event trigger to all four pipelines.
C. Create a parent pipeline that contains the four pipelines and use an event trigger.
D. Create a parent pipeline that contains the four pipelines and use a schedule trigger.
Correct Answer: D
Reference: https://www.mssqltips.com/sqlservertip/6137/azure-data-factory-control-flow-activities-overview/

Question #3 Topic 5
You have an Azure Data Factory pipeline that performs an incremental load of source data to an Azure Data Lake Storage Gen2 account. Data to be loaded is identified by a column named LastUpdatedDate in the source table. You plan to execute the pipeline every four hours.
You need to ensure that the pipeline execution meets the following requirements:
✑ Automatically retries the execution when the pipeline run fails due to concurrency or throttling limits.
✑ Supports backfilling existing data in the table.
Which type of trigger should you use?
A. tumbling window
B. on-demand
C. event
D. schedule
Correct Answer: A
The tumbling window trigger supports backfill scenarios. Pipeline runs can be scheduled for windows in the past.
Incorrect Answers:
D: The schedule trigger does not support backfill scenarios. Pipeline runs can be executed only on time periods from the current time and the future.
Reference: https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipeline-execution-triggers

Question #4 Topic 5
You have an Azure Data Factory that contains 10 pipelines. You need to label each pipeline with its main purpose of either ingest, transform, or load. The labels must be available for grouping and filtering when using the monitoring experience in Data Factory.
What should you add to each pipeline?
A. an annotation
B. a resource tag
C. a run group ID
D. a user property
E. a correlation ID
Correct Answer: A
Azure Data Factory annotations help you easily filter different Azure Data Factory objects based on a tag. You can define tags so you can see their performance or find errors faster.
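As a sketch of where such a label lives, an annotation is just a string in the pipeline's JSON definition, alongside the activities; the pipeline name and annotation value below are illustrative, not from the question:

```json
{
  "name": "IngestFromSystem1",
  "properties": {
    "activities": [],
    "annotations": [ "ingest" ]
  }
}
```

Once set, the annotation appears as a filterable tag for pipeline runs in the Data Factory monitoring experience.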
Reference: https://www.techtalkcorner.com/monitor-azure-data-factory-annotations/

Question #5 Topic 5
HOTSPOT -
You have an Azure data factory that has two pipelines named PipelineA and PipelineB. PipelineA has four activities as shown in the following exhibit. PipelineB has two activities as shown in the following exhibit.
You create an alert for the data factory that uses Failed pipeline runs metrics for both pipelines and all failure types. The metric has the following settings:
✑ Operator: Greater than
✑ Aggregation type: Total
✑ Threshold value: 2
✑ Aggregation granularity (Period): 5 minutes
✑ Frequency of evaluation: Every 5 minutes
Data Factory monitoring records the failures shown in the following table.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct Answer:
Box 1: No -
Just one failure within the 5-minute interval.
Box 2: No -
Just two failures within the 5-minute interval.
Box 3: No -
Just two failures within the 5-minute interval.
Reference: https://docs.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-metric-overview

Question #6 Topic 5
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Data Lake Storage account that contains a staging zone.
You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.
Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes a mapping data flow, and then inserts the data into the data warehouse.
Does this meet the goal?
A. Yes
B. No
Correct Answer: B
Correct solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes an Azure Databricks notebook, and then inserts the data into the data warehouse.
Reference: https://docs.microsoft.com/en-US/azure/data-factory/transform-data

Question #7 Topic 5
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Data Lake Storage account that contains a staging zone.
You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.
Solution: You schedule an Azure Databricks job that executes an R notebook, and then inserts the data into the data warehouse.
Does this meet the goal?
A. Yes
B. No
Correct Answer: B
Must use an Azure Data Factory, not an Azure Databricks job.
Correct solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes an Azure Databricks notebook, and then inserts the data into the data warehouse.
Reference: https://docs.microsoft.com/en-US/azure/data-factory/transform-data

Question #8 Topic 5
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Data Lake Storage account that contains a staging zone.
You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.
Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes an Azure Databricks notebook, and then inserts the data into the data warehouse.
Does this meet the goal?
A. Yes
B. No
Correct Answer: A
An Azure Data Factory pipeline can trigger a Databricks notebook.
Reference: https://docs.microsoft.com/en-us/azure/data-factory/transform-data-using-databricks-notebook

Question #9 Topic 5
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Data Lake Storage account
